[Binary artifact: POSIX tar (ustar) archive of Zuul CI job output; compressed data not recoverable as text. Archive members:
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed kubelet log)]
,$u21fl)uHy'Z֦'^VwX-LB!7N"3 6(E0 SIIL%r)%KVe(f>&  -`:o]+ RE{Nf8˃IJ 󁔶hqT V^'!] $9:IB 3S`JFgfDGmYDhn7Dn'T#T-ߢs$U'I/X 1\':yBK0t{=h<:LaE#ͤVfVjRM5(q 7M߸G81Ey? $qX)u1m0^?%VR2U4URIK@pɫʄz*٪LIx bER*}FrS tt)*'CIe|V=B\ (%au!ꠅ1n~wV#`;c#wR7~ZݎZG)+)ViGɋ* -9;pԁGN]{rVx>'IEcvS,pDP2.B輱Yp $[tF'It(K11o1x K'\Ѳ3I9SZˈ3i3z$tW\YC ;+=µGͽռZ<2{>\¾ZcXJYm#GQ01cXR峱D΁JdL IiرFq7 I)w rAA6J>rR@Z/o9,L$ %: )p㠜{w|a8a*K7rDkmH@=8ήqI>m PV U _C"uUu=*Ѝppa#\(#'Ql&l/ oجEÞm+t!8$ahQOv;2SH^>T>FAQGKYO 9^0 Ad<7Fʌa+pG4GCVZa^76ujkvT'Fq<<7Og ~A(y"qveF~X`g?QYo7wuiJI2æ!n5Oō魎ی!^c6chnx8pa-Ƒ~jW?葶JCbI6uDJtOر? &.v۬D&_{T;(zTx .`1㛛 ׫7u VM#uQrCsϛ ^ݠUՇ-Ww*K뿛Jv4)M ]Z/[_RuuŇ<Ljo2Z7<0IuyYjUDEQygƷۼu**6KO'sh[jud'"P{Ʈf*h|6ӈOtL9eD™b2K k.hDPaN)vyDqd4sN{g<]8BReӘYN]D1D(A0D 0xt) jʸƽM#Ke~T$:t/GuQCSs_aCLE)?+ÈP1Hj_}H K?I'E2Gַ1KO?8+*: )~Vxr4m]2N~RR;܏J`8a_Zei nrG 9WgUً:(#L$(# `J$gvht5muxy,/flj["c_*bvQ3f^ivS>Lo'  {۬TNLQ2hZw2nǷ9)#$Ł>kW׷Cx|^ɍ K_>Nt#e!.9-"zc*ڧ?>A ?LtK}:c0rQ -Fs̥$EHBZsc:0.")Ͱ3F ש~.D(Caj3ւ&){-#chnE*Uܱ~9-7B1,(حrJ&y,xBS&0טx\$(핃VR`?$L0WFaHjXO> & T)J2JW(zWТ$]QOv4-K%_Ԩ6Noƃ&(o_fi4[ʞJ_1cO%MA^&Dk,6aHI " D$@WC8u(0[F(" /(" @Opʩ$\iK @DdFВ5sf"T9ۑ;Y ]qƮXº 7wٚ&ɸVzX:l/dZڵ~wA1+u0\jFIהF`rkDǨO%zqkPM !){ 8!j-$& fh/#vHHLBDN3r#y,wڤG^x(bAcx'VƔL@P;2 FH;wY1 $C ap4Hr12ǔ M;#g;J),8t""I="nx얂 wX4(['"Ml ,r:ED0BB2Ѝg($^2$m؟",iP΁& #ֱ3rZTK -b1 w9&[gg\+.q&'VYQP@s"*mxHFcNNk0Cagܱ+pǠ@ؖ*.<#ZO}+77\e1bHtX%p\\ wL>,&!8Z,s,4: VhwGHߑ#= GB55d]4ǔ6Jϝ\r 1w+@GC*]eq sVA-m=30 ^ص9mRaOof2|VǛ}Ŧ|![q9:G#^| 0Ue!W `~B .Lr~J!g\HPX\%!\%r;J*tpԤgWD'W)SD-WJzzp鿤.`ДzRr^`F%T|[0kf_/_dj"W@oxIEF\ԔfB.~9A/t`XNf]06mۑ RY l< pfv ?0q1b6wW} x. &c0>a+y4/.3˜Yq?r m͋8(7K3VJwOw-_i/9v3|%gST;^s:%&AJHy*2F}zD%C=GCԟ  \%rXpD;\%*iop~p%),8@0CcpNb}*pe*Q)pWj:8,땕.{vc&EbKxX|F^-Z7%81f3ms)ͭq2̆)3Knw>W%cO;YbeLE"ݏoyfP$= β%wWg2G5$uz#Җ+U X1e `cJH&BEe^{-㳸|0 XGf69U?(D]p384.ڭۋ_*;y~?qzkShIu!J T?`}Ki{/Φxī^fzz~vkewUzb3xa?3EM\fݿzm}bӆ~]̴jKz2tJD0G'#'r8%j J{9  ;=[`}t"g4`r :%&8U$h5imRLNe W3>#k+;دjE#mC5e;.vњ4iSL[M*ί7+cL]r[\YsZM` 4vQ4%)'q;$+MRok]wbS&4ٽZQ| ugrͮllbXK%)ձ_I`{9CO$p)DyFo*qc}`8XdP|ٵpw|U^*ڿaٿy9˦ߌogπw3i-3xOJVͨ$c<x{BmQ+iMJy4i*KѠ̞¬Wg1bٳjSڶ>?d_'g 7#mb) x)53jt U{A^̱O;ΉuOU[Ԅԓ2G|&[n yQõroIa:XQ]aWjmۼf >'yww6clGsa$kTq_oCڪM~ 80M S~u`jrjKRCQ6n&O0\Z7<0In4`dխO ~ܫS~'Ӡ,aBF.He:U2dUxV@=hՃ~d݈(Ejm*P,s:R21љP+KGשּMltbBJR!0,grHU%x1m6 ClY=ڃ3L,'P>d12%'AgƤlB (UG,Ͼma:VL9u!ſkYK?3 PSVPl̨ͦ0>Ы;cXϪ?şzƧCRɖZUTZdvvqzrN,aW_VYZ8N=]{_JeRaGE4H۵4BPeI& P|aA|"\l a ْR APATAEO+ ̓(mf-(/F_XHͦdlUf,me~;Y4yEk̜7ywӋi~AO+ؒȹuU״HU;dLYAD l=)ESHe'BUl*c`0Yp2:Sb."Zj,MaN&銊9݌:ڰ;jL@%"FTxc%ڒ2?&0ҙQ`P8*Xi!32樲uD4aXYJ- Ju7ͦGpGҘ(+V(Jp%hBM%#)Ⱥq֨+J,Br@2lUbdM*o%FKf٠AaWX.NCu6m"6j'wr>S bH X 8D᝟*%-C#ytR:KRYO!"QǶP7<|@R<ѫ#~n/8qF;n~|GiJx882&S4Sd v8yG3;>Qx# .Q (Q "Q!~)_Vψ| h-+Zd2j#D-.\T]̌ˤ.J@]HQ3|d(zK<Xw`m򖭒;{ݸCVptdDu&ZQZ$)uZDi$nƔvn'JwWDRF!. 
eQ![-TbX$YSLv#[MgvC >z5Շj2 s%zV ׶0\v.\D>'֨ [\Iu*N&ʜQj}& dL)j-Abr„4V%H -/@.b)(a`BNDjo6 sWt> xu&G㭕W~zr~a}MkM㵔ښr7-7K=qDL-))k{2UTٌ)[zUFY f K{i~W&E`ժD$c!::ԄXN1Ǟ=4OX-/ͬŏٌɔJVj--)@!iTEz1;jirˢE"EBrqK8W sN2(4je>2)Lݎ26Jp59 &ʥs!{,᫬ ~!z-syw$K:ޣ͍TǏծ1Y6ף-&^+f }~շk*;,+M)0G]HQ<+Hz5CmL>yrk2Bc_ߓ޾V0Mc~(29ч~w7ïw$Їû/yS[7/ڵ[7S/|qhkƽ>(.)WfjHf=ƚ|̚=?}c՞9;(-k5qEꈙzt; ߿YŽu,>sOJjU5,IHԌ_K)W῏woo6NIznkRu=1Dt/L 1D-`cAA:kf̜wN&NAuhF[eyu >[73݁vK/HD?0Ș/9myce«ͫyxRͫ~Rţ&=:a]ϵ1M׸g\ߟq'2814B\H3~fF*ُҷkf^|oצ ^{>y ҫ.bb@}f)%!C^Y|;5_ղuiseò?^~ߺG^O^|Q˳yV8gv~6΅k`QaW;TVEFL .{tvT _#^=iOnyi7_sQf5T=uf/:% ɃM.Z lj@VT+_]ZYCH<8+B,%*)fBKt,%T$4ɷ oc$EI93>2rAh4$>ُQMg k(^ 4OoT!K%7Ҕ2l0 M2")#`%=]mqJɎy E:Qk@yMS(mzzӲNZωBT|&fT9 9>8ܚvh<w?dII1L31#Y'qp"F*[ͤ 3Bxl@NN>M_ԹGy'4=[>rm>^A]g@%+b6n;HBʐNgئg,]WOact"ա >@' ê< qPjg_59Ӱxc2X?ZyXB TmCߡdm;j-A)r>'QtX JS\D-( UwtPݔBCeG)S  z~q%PY7y!cY'>LOyiIZtѮY:DԮ;HA4KF^vYb-'\tp]][?Avh0'%/װv1ym!aj[Nf\”KJ,Vȴ31=ĴckyLW^9_3zv~(FŲ]5pm A=\i2w^.qMm+C\.$;t*tJtQ2{lEfC(\6P6D5FZ‴&T#B-C?rN2 J6e- tygkbc ]CϪӮ=i폃]5&ˎibtU{y!o1<1ҟzxd}o DuPKyKARb?8tk+W ~XO66Ic4UA-͆D)aH.RE%V78!{#_vIN`Ad,fLE EyS͇(M-Lf7oݺsJ]e¨1l&7"fr6}L'2ٻNmin:=mq2}nt27dp"z$=Q "pL}-IK^juuReIB afdX|vӿ5Y.|_#f;NV}ԻG k}y~UaߏY2z=cJU6Zȝ]M.}BqW'bo(9pw?zw{P՞Ayvkwv[Zawoo?b눑7m {SvclLc\)۲x]nw>񙶍Y(ϱ6W-}Xݽsrw}?w_Do0#75t\-xsnoNSáD)+J%t^Eh ڠMKoKKPzY 2y,%i"4QyA)" Mڊd4Z<&d|-PqYڛ 1FBXJ+M3qCS9jRkuPbU˃|}?dH^sUQl%R}DyF\h뭴\&w>zn3jf̳ࣟ=L9\=%EJjq%k~IB/0̝;2uu1W\ddc?k&H?{|]av*.?J0nܲ\H66*P9XS<1|=.=_q ?eJH9gQ+ TcQf|,6D>|L t-ayMrz Q$|& ?csa+:z3DF|N㫇IkvݥARy%3LBIf&wY>,/]}mO|PQ¿gD}pm.o{dE?.V˷Rv[rQc2tzQz|1Y ~s_8pBNuj|O,N^|ѤV)8Af]z9C\M(__s)P;dL=pU*p5+4ͧiEPӷo&i/=_ uA*};-#Bgd]ޑ)z ;rqFv:Y=z;U|R7/6ƻ+Hg3a,+Ƹsg8e+ h?:}7Ym]-Fa}|9~9JSxQxZmkV[2r</.(B|eD5'ORaYcQr/X~A>a1&)/['ӚfgjrYj6ůg:Q_ zo"y,)GX}p/~:?1t2 |2@^`/o5MY ]p?_wqa3h8mb]<c﵉Ę0-1&#7@ɟmv{[ ycԇĔio1dq ىzҴ! N?0T,BKM NB`#ٝJTv'Z=*1ɯpwROX߳`KJV},uEיxƽ=}M)cs;eKQ3>ſƯ FY*R{2 tPvؾv{Hčlyaͯkk7%]ԷN]'q7]po6uDD7~hW>zC#48}w?3qNo=-*o,n]E֭1o}>t:VnmsazO'<@[8-`zf-6o2S$?ؼ7RV7['{ϦJ쁮17jX)]p4v1CrU[]D; M}"'o.FOd4X80h!RR*=36Z F=@=åx/40e, ZR †.Eyͤ$ /xKcGk 1cUZT%R^Y9_0K5'KV@l 3MԁAs)3g\D-1 `Qƅ(,uڨ9(^UkPEZ}.Vk82RIfEcB' (mNBECb\ErzB܂NǢd2\,+D,5WLEH9D6  . FcjqũVG6$Ti )L\ 8p,]# e-d[D/3% !.wLIZwCB 8|B !o)Y`@:XR*R['(< ;!:(,|q4u0 eK&lT 謾=` lXX d V$n_d%' Rߨ 1 J8 92rܠ`50jV1MRP6ZRFBj=2ZCbM)s[BIR ˙3V3KvdQ1+$0_e!**ʖ N!&G%3N2Aϯ뱃ɋz6]q5*( o,]:lS!jPx #. f&LuW;I*X@M*K*$Xm ANj `(ЃF<@Wȼ"\*gLt4,Xqd  :t@2V#F@܂`mmGM,TG,G'2Vwe%,6MxY F`; >, }{949ɘօ]:XWXk=p`Hc#.MyH! x— @yQ,PdHaaBYBH{0Zj1,xW:vU9M1H+JF-%( ΃j9Q x R7e#S00qVإFt gՍ`"y@?xZB v7H-8\NC0̹ 1x氳XaH! 
c=h͆vӧ#I]7Q?8XG*FˌZHA8H5-It:**` i= YP ̂ը `756*gUsWo),^ש3 H@?3M:qPcx1s6Zw ka1GT@$h%a:-tq p@3LkL[6:@݁aQV<u `[V,h9mr>ݣ=7Tu>sg8`K75Rczys -h+;]˥P4,w`A(0yT*|]Z M6z=jֈ&i@ '˃d}E&bʅsE4ivr Ch;=u6,0 a o(@(AHbQ(4F|R50]:PU:c UE ѳdtNiuKkLr#S55`cuV g'E;byPI X S5%i\4'k{H|gPεp^?xYk}3^r05XûePX˙XV2iC3~Q0+{!ܟF)a;|V2L;=`zJ8(s.0J P/a>@p`~V;Ms6jKGa\v)&#I XX$0D&zS)j`y]0u 3;g{jۺz#ϓzōb+uId%uf*AT_!fZ (/u0.EeeT*hУC[[N4٘3%Ϛ`3L| ;&H۟(')EzL m$&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b2cΪ1@`/պ/L ֞|ztr++1^Hq-1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@/ $7OL ĕ7L <\')JzAL Pq"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b2˰OL 0fU7L ړg2NL2{FL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&astկt5V7e۹uڝJ M' PXKF9C\q q }n[%qE>zoVF-`#TݤI-]a9`Z39Owt, Z 0̝.AX#6~r[\h45}0w5ŭ} 3 ˭A0N`B_mc>˰kQ+Q:f\ͻWհyGQ׍(}1Rc&H;'._淓 eߏv_-ViWNB _(J WuX&`y>ks4]+r=x2l}*b7`]BC[_(e?OA \P-D 4tE•4#V?_+WJk٩JIG/oH?&EɃ0㴄07a6`η/7jӖj{7*97UV \-eCU/WxY6NEbz_ԱTb3Y}nC@\|_ Z:\\}5pŏz~zo}qzJ/ WK㤵գ[I\q=w #B= \JĩJuDpr c]c+K^x]ԮA KP&{W(ҽ+WJkJ$ +TnyB̞ sŠgv71vW0Fq Lch!NQJAZKi 6ڟFژCR4pZ̮#eۉl,d]öl5O1V(R[te0$^ +k{ׂ=gKk~w¸o`'tz5^˷ﱿaB_<ރ1d10MI#Ƕ-l㺎҆ i=ﺶﻶ!,-l\:"1αd}=te0ߕ1 Tƣ|87o[ z;wy?Kk~묀YǮx̮w8<|3H3|a\OhW]bN>xhɛ;϶Bw٨7FJH"SNddKdf Ɨu]ξ|+bWr] 9f_X8.mN%c(; _l]m)mϤX/'ڏK}yV3 Ue77_L@|~Ko?ivu==vDY .ub% k\ZcE-Dҙ%TaNIj5ť" p.A1 GQ+ikr5ͱT3^FjWݪ ]d:00Lb#|gpmLnɧKPõ5%W}Dj_Ql6eeH^؜y%fm$a \,B̅\ t<"BR\(/iknVɐ9BqLs7`Rx$yP]i] t08_9z#oRc8t/<8mEmYV}g~sN~bZX]( QJQז( b?BȎ.@p[~Oa\GA$Y2R¼Ѻ"L<աG0xon,r4 +=l ].J9\bAN%YjKGG>SrH*lR*#gn5#faolz9xٟCVGz:,i16N6kQL0gl|p&ythួ)O?nُХ׭X[tYބWh+4;10ޡy7ǽޚG#=͒zQOjHLk'j9 鮺a;5X?f[?-;̗5'hv-3ȭz6\xWE~3ZI:}[fM[~~].6WD?5@/҇-~~ٙ^rԇ4zbmː.IWv~9ԜTcsX1VUDoF gR/ףbU$3Pƀí">j[ߥ{i}.uCk|lh~3,i߂6u`,.?N3DO'e00nS0KL?ݓIh- To7Mz |7cqY^&Մ%Jr^nio|>[=<n/L~n5A^vZ'5Lao23JX4݆2p=/ѢIr9o:`^w5~= e 9;BN忣3met]nott=+Õ^@Wx?yޣ^^u,'UWaEDl˧|ȚBűf/o֡~:|nӣͰI<*uH9#tl-k,ݖ&G¦ŠPN1sPLE9O\h&J+ͩGB)B/0L=Mqy.xQaܜ-g`:Ù֙Rfb`VV]ĕņ)@iZ'GS У$z@9O5!pj\sms- jL){13d{(>5SB] /%GcN!F0U,meyc2xtlw&Α)B)YٹbfĿ^AZ YȐ :-{.tahmL.K%.A)eP 3nƯ̆ ڧk'w\H<ah&e.uP;B-_j5+7n4=Cն/z6Yn]Po~8h?@SSs7_M7%N|^5 eL-l0#bAZSX2QI`h [8[\Ȝ 8 \r\(r( D-`%8*R]#cgFtΰT,ba[as:]ר︕Y>l_Vnbb<Uƣ+Gl%h듂iR`d&yEdE.;Rt SĖN =PQH)MFE>l;|-/ʛ:%:F9q:[&橠vg㩨-I3jNJeW-C%>9,lkq23}?T'qn8@;.YT$HH*`y de3?&0Ls0[UzVXC l!32l樳*n"cufC*5-z߳>썜5AaU1LL_F=kD5hA#>A3ܑ 4& bcb%>;r+H cC PWHN a,mbɂ*UԐmO:ْV` zֈ]#~:!:{}"+,) bH X QxwU}ʐ`tu$%Y'^|x(7WbӇϠ¶Kmȭ7q camlApC+kkz~l.NΦMdCZ)Y4&8^1ON(k)Qx:xG}@ ( cWHz=ȧ`=?ֲq"cʀF$&~tt(޻ 32tLAiLBU8B%r2 "g^a.mm>w/`I§ yiY}=@F+>@wLb^5ܝ._*gt7b)6'3@i;(V`M0PkRfb7؝ 9B $TkUPH3B$PDi$n[PwIHPˢC1%#\҉bq&r-tx=tHCp wӝu$KI_XEh5"%utQz8۵Hh8WBGsφ㍟6\_.kapnV1m* u yc7ї cT/THAH; Y&jWuu@Q9V6Ihw/i+\G[bz4!e'魌5xokPUϼA%ɰ ۋ.wb,f ֛\Zl7p;V|;7dߐ5h;IgdK&ʺa'y|Nb`R:44J֗ yq6Gm~m>NG7vZûndշ>tG-]fY{C k^¼vQrڣw˜i):t!rvZ$-ŭxѧ<͸]0kѣy[-߽_IG$jofo+gwYadY)>_:_)m[kp u %$ym׷ķRkܡ'T!Icѽa8 0]_ddhʪ-RC)_+ƃr~'6✗$Go(a"8X[{|6}}[`Q}H볫YeiǼ.O{Pڠ),QcL1Nt<ɬ%HePtrѠ&r"b'!}Φm{ҝ<GQ=d >oO>_\NO19154^GhRоI1^y3ʼ*~PO1RxJxz`_AhQW4,`\t!e_t~+2ʃ,]b3W8goXi:\]_~O޹XpdZoZ bjtfJK˥e &Fj HvxAH!(a0 ULJ1{HKpĚK_fLTB>\^ZF= U2u]7@//gÂ}fǶlS̋xte:%1Z3 ]d7sI[YE:KJ'K?%Mdֺ Z㻿6SWJ^j Z @C(cnt IsN{Vq EKwTCt/?t1u%iH%L&ʜ"zT78S&hzr QSh-Y*% lS Zc6)+$<$K$jMȼ]'&?ņ@}W-(Q3wKqk7Ux#mA:!b;*i)= $P*Ab̔EgEmUx+x?hfZ/<۶mQ,mRZ%uʶI`$6/eȞV `*q*-C*Ey(Mu}9֝: O2Y+DQ%`%~RCB#@UeniABeтCPt BK҄IV;:#,G&%_AڧƠH01YiO\.xD!{LV)í,ƤC|ss~;1ܗ|<|镏ލgɸ:}/yә1QXI0Fѩ$~yp|e5ݮsd4f^z̷Zqi^#hHGFut*Ȧr]v̕M$p5괞E楄ԁ篗ױv}3ͣ2JX??#;;)X}_~~Ͽ~}|?OW`cmEQO&O"'`追7 M} -vA^,s>gq?n>4I9ZˋzS=·{no'#+L63Pd8r3ͳ'ogo3w bg/\+ oڜ؞rsqθ{`e:QZ Gsfiw|S O51sZVX4F.$lh[P&0\ <A+rvD-!˜=YJ0HpJDQ(y" mL>oI[ 1IQR 3UrsR*}-ͫYra?S@IyFV6l0[IFM)(X BARHR:&(.QgH,(F,+M mzlBr(L̂ir!JW|>Sv?!'P |$=^ʽPx.cyT֛ii&F9:K 1T٢ImvXX 
zY>YF.5k;0/bLjC!]Ր@UYP`( RIY',T/%Q 16ZAV s uHч@Hl$$ A="#!U/mHTz5l[|e{-$^Nomswb_h*[Algʳft؅wj#_ų|e<-n|Bw]{=w#&R1^w*npvu3oxNWm< f!leݻ}ly|x{狔#;q~[>}z.<9)y3_"amg.6G k.lA[nm pg8$ {as[lsKhwۜ9UO=MG4Jh7 A8{KV;ec`As^xMH\&'dЪt~ ,Y䘌K0.k]=0yBRVe Boc9c{#g]u!P/.Yvd -޵6"NpI^6YJ=SnɒdeO+QlY,~UdA;>/(I*rH^ dIZs$KNsML1;߹vnf: }vbGrx1\741,$ Z43WՄfUw$9g ԈՄ%-:#&m>+6٧lTh8uD T#8LHr^3ĈP~%45E+~ WoE[E~a"%@HXN0j8("f3lh $njZa>:&7^%ͧLSeV$<&cH4p5HWhGqTqdĵwCUG 8<(i-JWFQCb љJf"h Z%yyDpl9# X6_1w›oAa{T p4(U@}@JKa^ㆫV{h U{(U}=prcݛQ\_oᯓO?nМ?sÚ7鷕Chr5^ᷞF*V#7J RBݧ ^i|8j2;q/2c<4g*7N1ܣ7$(Nj9)Ʉ̧`et[O○_k M'烉UuNɻ'@iHL03Mg}-/ƔO4ZrӛkWOg橮BVޠtUTGx “7[VGɏ"“2.POKh0E~&$AHE" "0Z_ec6Rcx G* ԀUzetGᄌ>pJF * gGtGuUaMכ7 >hB"s8}a_O}SR:Nv=ҖHtrJ_tJX$rv(ʃqWB #Asڣ؇PΡA+{}$䙕j,X^+`IIsи(.QZ=n'7МGcBk*h 9շ"&H1DK1sPpvjw1e(tC"wj(sCi>OŠ`''v5v8БЃE9yj.>HRpԯ! b*E X{pԃGxǨvLtp ܂ dYVkb %]ЮjCK3.ar9aC3>`2F~?_rw]K߬?ƟǓqGw6Z·|ki-5Ţ^yPzEѴ,mn[+a{N_s*<oütOMr~#"WV(iWoF]*AyRum ,d<9kC bґ ц h E@hn#W `G7A0-d85>Kq:}Н}}\ ;2~4Q:wN''qLa:;I.'S_U^01tpqZ%¸O3)Oܨ2p8g]䓞!wyU=eOFps9bhz~iKWgfmwuS$`{']bխPMKշOZFٯzĎp י̷޹k͞=Mu+NI/c>5k#{+`-Ikێ2)~]LBFd:V1\y1uq)X܅3Rh|ѡh^_{^s-I!X=JeIB|r8/ZdbZdqGUqK[~Ol}Z ~7>Yfs86j9+J?zY |B~ޓE!~50 roL[iT(ןVh,Y~YDݮ*iӾ^Rs3./Қ#?d+a.Zws_Z9 ``s?,u/4 xXxv \/nx9p^sgvZ1puի`|98z=jz~|{m>_l46"xCջ-ި jlHߴ.qn Z0mx!0W\xAm⽌m޶Ǭ\gYE͆-<.-;e+~<}5_\O6=b.!V'S5Jg0-67cYt!qZm(׮׋ЋǢIǾPևb?}x%( ȭWq I &;@ُ/~4JTE59`vGWFz!Т:i6L =[TJ k*15 W_W̔^CIU]{o9*B[`,a. vg H&:ےWLzHfW|m#qɋW/ZڿOk~@~ |uLq=-ִ:[/ݕ_R9ەKX(3BF{2tۂO%Mx-X/`4vp9FM:~Ipjmާ/"dt,?];swr}>I" MQޏJs94#0IQ<_6\!ge}1*GYq%:CbLnRMZ6~5zfW]:';q^GE=_׎AѲRPY\A 'i7?1oA۔RgmZE d K&ח`^q7 pUF+DU }QeYFy(mr0G^Q *A1mlpnPsaΔԧ^3cK dЯ*iԍ"T@P:T*Uʒ2ZJH]d>3؍9[&5_.ev:\@~j7as@K ] cOM th~WzAdB(\lBQ@(.0jE$r T&$$w 4z!Tb+P{]Yh]l({HC*]3}tf=dTޛ.Lwi9pLëVI+DϥPy?UI9RiTdxff n۬q'vj|w՚TINꘓ3(}u4i6[O ̉#jͼUTOc&NhkI!J\4qI@H9rAJLLB%|#ҲMHA[+w2y!Vtq 6mCd֖25|ֲ,Wzs:tV]ndW* ܓ*pD"*1#fJ,Y~QL:}P|Z_;*E&$2NX6[ǕWف)m'A}B {^X31^]'a-[Ii c :s%w^9`5IvIACԂ"B\2ӳb=?Ѵ ieh,"_uz1h42mK}dꯦ G% s4>Q2![MiVYq$om:uuVyO"h\O4e)>^,'zz9l8rZB먂mum+HDףN7--% }<`z>MqMqsםjSCi%Csgȭ;=$w} 1/\Ƅ\7Qh' D: LcIlĠmr:@2d/c{$c%]q){޲nem7zs#Y8G4CnjmQJf>KuUITJXժB O<] x:xP,tsd*DR 1y!Y%x}a.ʈxIB_n!zjuȺͯ2ӽv|q dz_W-l.:O(O^!J8'+YbCmsPwP@ͺOޘ57k +7Q8QAIPIQ6 A t_Xw}vǛū)ql kUSO;.=f9tnCW2|瑠?j`L5B.ř=?Q /F&&5LB8QzjiR; Ď*&AFjj̡ ii! >ҍ>RP +H\Ù1l0^ Q*6(#Р(: ӐI/WJr0.IYRyV$VAyc%W]WsC=trΗa^dzyLaߔP#DrTmFy9YPYXtV挚) 8)~J˥αl^r;ܥ!Ip/g2 SR;H  StTXRA!8ҠyB(2.N̳R`9qJe"+A'u(2Q\v)YgLHyYgl)g,yɏ Fj՗Cd dB1 x2#ec"zu/]z}&`bJIdJp'dd%j OI`z>o] JGy93AtLD/ӤƑ8l҆+W%ol{U3x&5J!:"J )@g58<*I\Рӆ+:Yug!JuŦu"w>ӱ-Sn!9GRxRFǀsx*1oD̻Z ĐP ӋGwckQ{Q>+t؎þD#|:iz&UduD Y$4@` Y2> .L'o6iRom]io#9+|*A2h`LOnAcaP,y$9Tʖ)[R)SH/`߾^6;ړvYHlɴͿBQهwûrهq#$g`\*0IqЪ`}%0SN'Vw@&J t/Z=ѿgi@چ?|P$(RxDIzV2?$=OS>IKLC*K3rTzuוNdNܗ4:Q03^ &HIl%uQ  'Oy- k~{+hnse"h:E~Wͼ4<@ls?ª"eV P;D+=^/ː3:pqz9(H;W釳 W83łOTN&q fmP$|P+K-V[˖^5RJ߀.r9`FIˏ 'ce1Wⱐb2Nda-ZvDp%9*O]:\+mW/4HdxU1QY JuW/Prs{>-M/@w~u?X;*qo~'Yj]̣&ځ:iL,2(c1))~XϴZ#Sdeʀ\!מ'̀#$2V}Ve`XVC7GR#,d0h8K1E +9K,rIrVb6ӳlM\3;_!ό,fDX4g ZP$f";\6ԝjl_b& L$Qi|tEHD0@7c,)"J(Ʋ섍1rHx"W P!UtGp qKQ\99c;\> v _,2%.ύSdKGvǢ13Xhq>f^/O_!6d}N`$M䎨Fu_4 =MF(u:5Xq*"ćbj|Bx P%#yb$K(]=Pz4tlR$$bE爡I>Zלn=uBNuOrN"r#ZfmNeB5(#1Ō:ZEbLyZ5sj7H|N8Fi95) i}tM_ $kS"*HN6^ "OʶNOzrH1ĮR9Kb",yar\+H|wCszo;;v(H1mzaZZ6.]I.#Dn6oR Ѥ<-4(iAӨM=T&4#o{=\5Urۛ;~6>ƟG+X~wD<4,*s8ԍڷ_Kv@&kTd3'bZ[%y!N++ &%rl 1SiyO*ad~e8؍C6}};SD`<(93z)sֆbD "Eh%RfEW[ Z€ʐcrYFcZ8̡L#Ӂsټ Mdk;)ҢR=1</f^|8{C"buD['4:ۙQ .U`h ̪$.G%jT t.125:aHG Bˇ'  {A`DO0:z Gds$LEDd [I ߲e9p7oV{0Ҥ-ƧKkX_L"txMأR5F+L<3&6Rlq! 
䐄t&SxƓCJBE^ /((]pAR ,M$ %{䩂<0,_0r 8<1rGe]Q#E#:CYCw:8h xLhd%)J6Ab^+q(dhQ"E=3@=ėjxMw+NK3q!HKpŝF 3L#Q7cwܹ$"j,KDz?j*LRk*d2D6E ǘc,d%"Y]Y[ʷV\vwl O0,QOd&0O*яoZ0ԼM"neo߿<- K:6eǟp#57>#N/gmՃy=ˋ77[<qs_V[8(QncK׻~NVuUłޮ5骲|P3{ؿMwpM _V;W%EٸY˫ Wc emWP JU!6}4T|&Oiu\1y0Havrٱxɹ 镺_Wn|uQy6n- !dIwW-Uzp%CB7KrJtƒ2ڭ.y]d$6MBe¥yM>*sYE۫Xٝ~'.iy|>w>]U9>Y:b*փT]1AW6Kls: 4=oҽ{WbLMqg6|VuvzivrFtֳ&n}S~wۊR R_onCZ> M5nޱ85:%T+R~i>8٦E>3%/MVOX[IlZ_:6n(={E}˥ݍ{6)6jdMuwmޗ7FoWb鼛-5ZvlwߩJN[0/_b߬xrO>uI?#l.Y,zP %x&>C2a9SIɃЫN,SzxE#>>,yckI\6ኬ7v>$ ƪ'88o5.3.aBϺB ԩL4R7ɨyNEgؓ;p94uprhAȡ9o׬W&fQW!C֪H$G%z.%tpP*=%@6K8$.zFk296gkㅮLxyﵐ}Hk;&D ,]DClrSI)c2\@1]ωi\e1`ƒ=4 IeY)&'Ii(\{1 Cm1Zl j&DKӤ째Ig|PjdG/u<:M^VAI|wXOڝn"7n%tѵW޾~,)捊 @ M\3n}]xWD"EΘ6@qR p \p@Is#@#P@DM CZKvExѳ#Q!B<4c.bFQqN- \<<;vCUa7<m85?ΑWbamAp}E? &J09aV_ꚿAȣV9ihH}&q mk& U&M>YXGZ畎AR% FYLA2Z#e} Iz!&9sYF˸VZ42^R TWϬA= 3CQqmYuzOD^^MT:TIM'^YzhCVƋR`:s^+SjɖZLJ2(!iBQ1,Q}(LD弚q%qtI1eq*'i],+M6f:C*EĐ# 1jl^&%n!3H_SY\jd{{9F\HzńZ,3VF!ZbM!#g=|w"X=TP5; C.z`Jȵ\١F3%eSLrt1Y~0dv$&X,ζgݭ5Rۖ&"U_U^G_ڭnhk\LpɻgNNE+ z+ N ٗ c2ۑ`?~~;j#O4b;*7$Q.*HY*䁲ZHo33UK{ɶBҁ2!^U'Mӛ<[=ɳiy3{\+02{â)(V (ƪ5[d(}ؔՙ $EvZ53tCȍ`2WoP/qyߛz%k?.dh7ʮ rm<[$C)Dn;6QBSh#Z{(zgb3ZOER4JTK\4%K D !)TV\D|lV,K;^|FMpљu^ ;O BvT:AAY̐4(hb> s J8r[J\-֒'0-@'mp"+$XJѡ YP*qT+B=XeAMXs ZuR8E!x!Gj4pu3kkIdܻ#Ȝ+c ο&DxӴmh20IS* hm$sPþ L,G>OB[+~ӾXiF"r< BBR#,CxMHZ#on:FzjgT啕EÎ׉I9s O$I ( XD`A(Q("Jt{:Txbü``tT|!o=t7VuQe0qvn`yֆ(K!)Et4:!0>8M@ւB7 Y$%Dw̚(7BC)VBlL=UHT|X%vy$y1DR%Y0hU"gXyu98 N{ QNST%bV{EоUTPJ IH:/%*Z|@&'CNMZ1FL14M$ak=H x&9>)ϼa"6 /1 Asɲ\jlTn,Wƣ~rcYgU;4a1іq*8SKL(0[`;^ԇr݃f^A;܁h*hCϝq{0:ԭR %Q穕}k"+'ZaNF˄]RבVH͈ƒg#& ٣ <@,MȜ7˃ec^ZvӌX~sYLR=B&գAYj83ۨt* dR8K<+IOдtjSk MjOu'k?eT'<9%0X?;zɃ I0˼JZhrmxrs#|ry9~tU[Ly_;&kBOcP.yؔ<2O4) 7a5uFT2'[tϻ;\ )d!M(1lbFȾDxp^62"R[E:F81Pcryoi%ՉKYJ{ZFeS $>וsli-{^`4}kzV ̈YfYﺽs"W6Y6lKsgfd_H-غezu84geS1g MyXg9ɰ{G6, ュr+4_\ыݟ21RY-L+SmWD !ÅR:@EYi,A4$_:8K86.ɽ.Idt$D[ QЌGChZfapV{!(R& oو!55!(ђ5#2Y" 4Bs[#au.??ώnq)j\$&^ 2E IH0눥h-952;mMr'Xa䁩h2 K 9{(9Մ֗w$9g9HÕ:jm RGL21j8[YSv/հ. F;](B~IuY( {D#BYG'%45ى@"pZQkg+ Z I>YN0j<*B$IϼFIj`|Ko:;yngKW%9ˬHLLjhơu WhE)VNeo'^I`3΅jk9+No!o1xi%3RQZFѼ<"H[ EvEZcMO,|DkGJM5`-R")l2ՉG{1c3P;Q|-T:tҡF'd4K.?Mʪl+S2 HM)D(F,ؤ9v''i{E C~*d^p^yWǗ&G{7:I8=)^ lԣy7:J"c)QyrR.c2E8RG)|ԢU+ rfGG"6h7Mp^RJ⅌>Dx>J$iz3dyz'y6-o%h\~/,GX ~F >Σ4F9^U,/r ͛btֻoMLp+eM^P?>?-V6IJ ,E 4m6-0˦bs}t[ g^Z|(}}YD>o)S^ȿݡ)b QL+#9oij'):HS$$E ]Я!9aEQ>;F l]3 >:8tZ_45 n]o@j~ji5̔ d(tDɷLr2!f($̅dWI2--b*^88rN"%~MLBkTJd 0 "c5F2z]d&K"C}%[ͦtr|AΏ;^ˍ/:_uL:R BR6A BU\͌Ed㥘_u`ֲrKf/Q֐uI`lNB9m_Y`D2爤bV̢B:Zd@g!*U0fT1ZAEkڠO4mcHȸ,Yp(rʻ ]F!T<ޗքYV:#Z#%'ԯ!=(ƨ`xͫNQ)&x>RP$eJne{׌曏R[0^c!s&tR|EH7l=)Xco >pQ5=t&x oja4:F ONm5Yw(w=(^O1J' Mr[pZ> 1.G\#k:錧:og`*G o"$jۊ?sgtrMm)ճyȔau9<ϣN#myۢvzpvvqQ0ҳ]M#p7Z%z1/tFэ;Z͛Wں9C{xu`A 3&0g񤜎OV'M'?^I|rohOigk$:G9ԾsT%uy]v+S$p5:ii!u`de^5yIPɿ։_杍'ˣ48fu#OFן?ÇA*_}釷/ ]"h(|_ w?~оyC;|q;|@\kzKfiяWT+t|^fThwd(92AJݗ8%8?{&;tdhmUZ`)QF#Jєꑝ@ z Gya;OBXb֎0bҁ9bA)_KS"Y\͏vm@]UI$X&t%꓊*:ekpW#) P fiC؃)|]kmLWLV44%gq‹تlѪͪM B6G6vI-=E~־S|[96p3hp>;/)0 NlU\o_JalUJ?$G +Dk߽X!y&"uy_`- i0Ebl"ՀIΩZmإڙ6om{1W:}ώGWDf"wU/;ZZy,q#ۿiJ22 yIȶ4YPT7 pYLMͤBt~َRM=ocZKίNs~Dճ<1MF3nW'3oCn9mZӤ#ʾ)Е/H>Snܮy-ovʟNNs.W %/mXF@YJL x *5bVm<@1~ is bCT(3$  >(^!=*w˂҈g1.L!)X?# B uRhc*j% d^= Sμ4E;of^Wuw:f.%C7m^Lǁ~gЇ)-=IG8%(A6$}S@EKCWNTX^)QҘr@k&3 %3db.E"0j4w"bzS^.ÛAQt;tuhc6yP7:Ĭכɵxl-V=Nμ;q 4(hv ]34N3yӀ:7C=scCF12~2"K ^(%/?b:hu$U )m>K xE'=Q u)eRE2uM-gskq3]~nA;T'&2o/>kqt6zb67}5.m-䝢M E@k'nzbT n￶ndzO6dǻf!le~u;_D9nZ˻oynt}<3߯C_EV;kw}G@J5lmnMsSKiPi)* gsps-ts5k>_ϟ4aB1i;0%T$&\itiChjm$G~T-h4pFj:./tifRfz u;2pgJ:}Y. 
o*t@V6]h0BD<201U0 \TU?>)nF6|ї;c׽;žVХ5;}q,֫|e}q.F &H[(P2FZQaCt(2N&G}䅫k6Fov(sQ>^S N\-xog^2Q6# :;iFiHJW /9(iۊx^!2i#5LL{U7x%!:#A+S;޳&ieVR\Dʡ,5EX=X[꒚/z/כ#k_I9Ǘ ջ ?`^^27/xMU9̱GD@U'-ZCJ0)G|t$Ti!pm e=zs'ɀ\:D84fSQ J3 Qȧ78]"TQMtWثDb7{UK{C8 /~*EQyB#L% X m"C<,h"7^AԡbpY #GkJgm(6S\iK `DdF5sf"Tid,fzd,Uaa1 qY,l4MbK9y5yQۓon`0x:DElhdR Qk % 6N0+D{!CJGt`ZD"rRFbGl;XPwڤC>%JXPDEǘ3zȘ2J ~`-a4X$QBgU:hj 8\g1/Ya\.v U9  hnQ_of914` Si:\<. ]Clj6z1rlI,pcF7H)`h~C 8Qj];SO{gKJ Mjƴ7fBD|F|;#w$GH(ߑE#EsEMMzzf [iຓKq,tXxB¶jQGC#"c0=t{ah+f֤x &7)wHXx. S<7pjI4?` g1FVm:gź斷g?7?/7S$zu/7Nhna?s}ZkhH.00eQ,=%+B4ؕh|y*척P:{s"M`rwg۬<7/yZ|()`b&9:e*_ͫtUSL<$0kժYhHCx8iKx@O?z73`ͳ *u :ʾG Ϩ쿝{k A@Ps.ܱz:݉8K =>=Θ" ̼| ySM>{X2s6o; =-cgmI 5KvJ3a`Y>̥T8FcJ; 9S+0 }2pU' cd%f\Bt*|BpU'îZȱUR^!\)<%v+JNdUvaH Dwp JS %<Άr6Jכrj XcF9K8x†е Q50X?,EՒJ UӒC/OՒtTP5aӳs$~g =3\mg.E WY+i"l;+Ց-3-upkcA'W`0d B \c+R!++X+0#~2p=v*Y)UW&f'W`0d Ooh/k%W`WWL#SL\&O;\%+:zMp;mxs0&3ta͟hd4t U Y+nq]Gۢ/{P1.D>mhu,!][nu Oʠp_Law$m$OwMhesۚ J_&nߤtap| {խZښ-jm| Dҙ\{)UhaST;NST;NSTwPf[oWFi0E]yoKXix`,W X, L )Xc,vSCCډ5+OՈӁ[)5D9 JausͬMHR5BP0vY4a#A2 <^s} *[3gvθwWnJO|8a7=nl٧c*KyLhرgbGR3Ǝ4ţ֟ƿ_QZ!(Z[JL-0">}G:r#k. 1REt& {vӢCM=+Jd =!"}Z"`gj{ȑ_i琎b 8`fo;} duQ$$,_%˶ZeʖQSjvçbAkΤ3%s Yr:`Q1q#-zSj\ӆ%87]54ܜf>.sҌ&ꠂB!O&%Lqa"iƕAJ)Hx(omqhڧjdZtaw5o^ȿKbQ`%WsR/K^>%WD"i:r̮dOߨ!_yZ\" {|n}ep~>v٪kiO v.,:+tϵo˺svı[xfg44(wp-FӲNaFKZ *jpe>l{eC%WiM q\lLr% "Wށ1ZN'Wec=-'7NFCп1(I(H0xΙsBCҁ 1D4t"LQ`n6ay܀`Z$b]A[G%ʬb%sGjQ@s&پmRkb:zSOJǴbCYh7oMP/v/>O񜬦4콙6}tFWϷgK V&Is 3K $;#\$$#V@=".j|L'|H 08Y\fEU6QC"MR@L\- Cq+XaQ$b`=w:4ry~W3dpTGji5:$«U{yIee: @$ZClCťK8X`Z& "Nxd:?=tBO{]!wUݣ4ps>b&OOoưy:5;$"uS4f?.5V݆z+['Fٯze89nv1dX/=={T]=qTN|R\Qs宎쭂7w$m;A{ swG~JRȆ^{xcbfrCu[tX+#GUo}=٫~\K@V=G~}SYw$"! ~*n#5m|i!ȇnp]O <<~-Ojh+MpƳiF'&Tɪ~5a Ly kr_/dZ Jt44Է7'njduӦm/Ics0~%K8LixcOWA?`c y⧖iB3[:j#Gs/&ɠ7E[}r/gq01ݨ~kKmEVf{7jzYG Nק1l$φr[&'FnfّiS],"}qK?? +݌{fٖ>9ܓ.b:jҶEq Զڮ3[:6a7m{:k.lb%aţu*:mtj:7lxnS>UΖlI%& ύFZ}-OFM ϔs SPz}ّ|Y(@7 dFqs%K]ze@/k ˋ'_bqgzV/uGUG\f ;^sr‹D/B!O)Nm~T.4ܨvk3|*QRDJ(+OA xqȫ7ʣިz:QuIJˇ4Z ) HrdsZAQ«L jc ^P8sHR`u<\bYJ{ bx{#d9˳=6U ]`܁HCT`)(R!j36Vs'О#Q.µ?B HLBt (<  ^kbtRl7XzUSqS4_9/饮Rd%vۼÊ:9wM]n`{=B;C^-) `G%qP3QEׄ% -Dr 9e)2:'&2&e=rh$Wֳ" 1׷R)\B*َJ1,,b!-m=2f1̸cX&/.̠+~&ף#fĦX$)syK89}b!*k$cL;RI0dg32lrA Dз#&&lV%68Oa<.6QgωZ 3 xy同GN)eA퇀ZNUp)Tځ7 R#m67Ef@!q̋8RN Nublک_DYd`MW}XjGz %U]TBKk"p I!* gi"tJTăP%FeJ>Ը7PML h2㽾6~;b՝Rŵ ;={!gjTM[ߚi"=GLzMN욬UwwM~O~t_/ENe2yh}|%^NiJvXiliR$.j(YK m)Y=OgIv?;Nf}ˍ&q8dYh.nmxA^k' [:99o'A"[g ,^Adr:5{7|OmfikrzZ'\90@mL. 
Ц+%_&i_LqBLg_SN' W%$D w;ղN!d)OA*/"rlƄ|V\gRt:')d$R{(˜KܺH뤭M$B Zc $ƿ(&<:QL:~UG$716ΛUy}=.{Ǽb6Z7C{/k%S?<>r10S%qUܥD2:lrD9 QOAHD*R*<9ìVĨIt>i :*@XLwJ n@0-d1qj|.-/93'7Ox)IfJPArD<*wQ Ke;E$Qp&Z 8|kRLj͘URI1q[Zx:Sy{׷=n[S4ujxΊ{ ˶>VPdG9rU>VyDe%)jn?^Ue%A,IKn;T ^5dH(mR[dP@cS-sP,1a(RSF[(HIrK%7I#[Qx[/i<3j6qJޟ:{n׮}i^Cg_U܀b 6Q?!d|5N+nTF?[T??w_^7q^$s>rtn>`<aV_zM·46dƞJmz ؍jc7S*Ǔ&e/o=st5=18)^9?^C׶e]lrԋ ߢ_ MyC(ZeÇߌ㷣i(ni;rrfG<*WqW9FAX4QCTXS῔G6[K&#<7VtߖGb='#T"l 02,#iC:wM>fÒZ{~oCd@'n}ɒ,aER<1rp!#$)E"N#g:\ژL0/k8/ۣKOvZ@.;U}qv$4 ۹f m}CgaB+2sqkt (o`S&"٫]<BsuQ9m%!D9d Hh:=TG#]5-J`-ܫ(ܶ4L7' 2r ŭ ȿU)R@ ,sAj.SהiƂJy`|IGt2|ħEm&[LǽPnU *$ Lf eOso+)d:-Oev>'U[xԉ7:[6\SK#LS!Ȕ8i΂NFfRHp zkݤXK-/[NVX[]7#m* ;Xr8rcQR RBĀ27rO{}y ` w2YNYbB"t!~b9a cGrlX>i+0QĀdh'/qeo6Ô.[~m]DRڛTG2 Ju8jkot=+6˱|׀E`Uō:wWoȵ_ P;~IG{=se8)CrXp#ڒ/K(^pdʸTYMI^xss㐞x5%ɼ=_qs~jJsio7f3=k K "؛>1} @֡;SZٕ;Y;'Xaaw浀%&b]Q$ACe1SYgTŔdЕd}&?.J;Ź#<ieё*[Dt^m11uXuJ.%%?00RȄyoYgDepVz=W ;2bQ H|>{BƣʏƮ};to[4Zooz;}6n8eMOgu~6)]|5'M'}KkW.7!?!t=]駛Oit3 G٬'+zZH>5|/-Z^{iy(7Х1o#dmougqϿLËw\EzmODiDs9F=,=Ss0}ϑ~~u5ϭJۨk=㥎5.VݓLW¶3v~Kw~sY.8"ۂ1#ɂVX!U2R"gT䩊Cݬ:/ǻv0Q?{Zж{Z]҆5'v9/xebhc:'c\)K.#OaPS(gbr8t\[.AwqjmO s$F[qXVPXJ6X+ffæ`O(>' ,h:xR4bXL"ʑ5a-7Ajb\Zys0Rr |$!J `9հ-K3l9GrXBzJX|pD|Ne$֕B$&8Uͩ[qhEߦ$ڡ}ea rszYg׆D @Y$& 6 lFZu&mdkEWFZdFBi#O0mի/Ɓ@ݼ ɸg`@‡fX[UK(mHW?<C;`8jZP϶S=hKoK[ F_tii77A( MRelpAkY:?v>0|2Wgƫ2\ʍw)dͧH ɭĩ4\W;ZZBjoja %j8"J.׬ zɠ x=O(\!./Ɲ~o|͜9lS|4֔=P4mh7Wl\<'ʢ9`{lSmT -71vC.SM`T{' !*5 ]w CcdȞN8C&ds.:CW.ڮUA{4z:ALv \ ]Vc骠uut`5BWUAS+kZĵP<B_*\ՉhkQ u|W ϥ8qwd64l^Y|GVWL @Ն ڷ_F(-tuteۥ|dw% \"X骠d+++;3-޺*(ׂz:]-_? pd ~:%oQhӲjRFW'zxEtk%kZJ:Y:])ʥ4mwTƨKܡpZ޶m[pӭA+QN=w;f8t΅IqM+ $i5.Eh/ݥ%ɥD{Yϥ:VsNѦOeKh""`Cq5tp{m*JoOt*YRW 8ЕMZ/RCWC^Ȋ_]81_-y!<-j']۬/@0?C|t?g(u*MߴU$hofyI=^Xde% z\1y?dXćAqu|wؠW7#s%[G -JΚR|qYa 7`tJH"J`t!j-Fmr)fͰ8@c8Nu6;cHtg?-48Ȱ&jJs8,AeV,3VRd2QhCwDK{(*#Hgf8hѵѦd@W}|I5Kc8W[1ևሙ43\dv@I$Β;H 9в#b[wHf 03sƖjhNXT1Zr&Zگf$1toM6Cĥq@=hÍ`>h2 A&.e`0L!Khdèi,U}hxHBBqq<o2seSEKG@֩ X2|,ERg|B9 5*e7-GpzӮPG)FM dC\cp·z 8'nws=1#v2n%Ǟ\1#QQ ~F_̷2*) R*KTJJHf%@_ "$U)֍K/-@DM&"س.V[B-}(N!'c瑇 -U&dnYB QD>I qg-àTGBΰ# †r@,x&THdF'> rN j Ac h**u(:k%ݱT " +lBGkl}điN +Q %ENyi[-Pb oDx,8(&`FhU%ٕc/R +Zq2M!lEP ! BHP%D&T+D2*0@x`?k;S1JPk':sU r 6Pl_ WA+l*8"(J8|3& " Py'D),d@cuT mݕ/(!P.g-UuAT"F{m>SGAȼcIg $$Wd,uR":,5s͐(QԷB6jpG=D &O ɓ:EwvߋqIΘ I(NOcEUI=PUitb'$_r0g+.M1tü &(!t_15r6Bڪ[!q*p /=.̪d!:T?^ s¶MZWAb!ҧa-t./Ȼ:Y;ebG`dNBfȈhP2ǠA]r%6莭(*D* 5y.x WH% dp7k ) 1 jcx* mM!g8Z.f"Վ a:<3DC7HX] @Nm1Y5(#VF/-QhH7< td+!N:n4XTЙ$ fZJ:QZ|M!jPk b7wfp&*b,<(cA8pgY355+A2Zu?@3,5լ›Jk2饷FW\Dc,eԿYn$ldHu }>fek9-m/`z=ny}b ϗeڛ}Ϲ^Ivz?H`0u gl 6l) l\S &= ;S-eVnQ3yf4zex31v3@9p ͳʌ4I!)a!/Q{úSiC&:`OI#'Pg䍡"ʛ} 1 /%Baڢ JԌ*#x$z U @RUm*mD*mǀuml"s ɴR+k֪U:SJF6]Y dĒ+Z{'е-9tz˨Aц?j o9h ŢWqdAiAҬ yaAb[̀z122.DčrX<8gf(8qK]RZG7ЭuE< \8 J`Yc6\W!Ҙ" khBM_pe. 
V3P5λ.?#t}#'$ wKt!ߏkqn)vmX XIFz6}(g !ugwJflO|Qܠ=q+mw%opKFϷ^?^\||O/?>;c~Ǜ~: rvt:T{8w=X8 {Wqy~zmg?):ƳSCꝟݭRWW=EΫ/`1z_Cq_u=~͗8?[|7_ <'tt{ysu|×+dж{e?۳Omܶv}+M3yv1 wnX*_6yæ'>l#ִ,y$/ Tʶ.\CHHyHIxF:Ώ i\ p {W p*+pˤD +B +B +B +B +B +B +B +B +B +B +B ++pJ1Wvp9ҋ:JIz#pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\n 2pRɏk5=x t#p*+k"\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW#2͎9G\spUЁbb\BJSAA +B +B +B +B +B +B +B +B +B +B +B +O0$o+ɻa-)fnPR^( Bc`ˎa`i0R^v -G$W`.cs`1\k?t*Vr ʲcDr~4rU̵X2\/WJ"zzJz] 5p8aihf1 `SƧB7WinJX?>*DtLaݰT:{3uV޲6IHieIVNWrJ%NG,f4mV,O?g}R{p~(%yiWqAIV8+[Ӌp#\5ڮUO H`U1YE$W}t*VZr qjurU h䪘 vXk>]B\B\p{L,<sN$WZ]\B\S~Lr3eFh֮Zo_rUdvҭԊa] nU4M*@=m' y.ṭ]7h:KmrȊ2眜 { 1q.?Yʹ?Iޛau++KO4 _A|ꍆޮ|7&p4r2\@Tr\G9ET W vY.07SNɗa(bi&M5MJ;OwVrz㟿;QK2Kj)` s2r:/kKu=\>:YV[LzSǂ.S,tS9k58uԹ lL쒳O d 89F`{⓿,`jsap/ taU:x[m7+m&WYUYsUw9&^\(ֵR hKq++\>ac2gxfΎ9GoejMBN$}Lҕ'X׻kxɵ oq3Gguna84lWZڏ|6ukLxwOnW'ן6Eʫl@Ys*䦗L> Mn$"Cr!Kp̏$ Tz3~+Sc۷榋AhҪohk*lw{Ŵ!#)GxZJ<+OkqXGߺut1JPGJUJr^ytOr\p~|Zx (®~zIVW,z6?C1wr6_;lCW#\O: Tr sC,GG.X9\* (J,Cs2ڬi֭..7Zྰ9w?ָNc*I~ԃ 6q'=wr79^:}7uG1.<tn>ݽaY? }6L$">vF٪1rF "V&XM 1d$A32FVdB{QҖf|@  kG=˂YтU6S^n2$r09C@0ji[W,;. C=*jߜ -g?Nۻ`tQ[[bs6l( ?¯x܏i &)qki9JHDP[rܻw:t#<2Tn.$(K9KTi).R0a9CQrNx0%Qy:Q⮻0(.VFZBU!{nHCw*83'iOtRJeΒeM;˅uk:mbM}хPmF=)zf ?Ȏ.Dt&,j2 +F?^E=K=K[zaSsy?ӟF} Pj ͉h[ .٧z!R/!uǚJȋ}`:Oq˒1t ¤2wߗ&z.+}Sv7(m_熜:7۵%K )4]];6jQf\~ gyOn %}VP{61I69٥߷@B`y$.ڑb4;[fq F(U|`?×/ޔVXCMi4ԯ -8+8{[2DE6hrP&LAjK'b__tPG[dl elo/j)U2uJQT i. ow't"!HT0R%I1#'W|G'X9 )[.E+-0j7OTS.kw6n-k/k{,u T?fe]ώƒתK7iQkw%AZHunk;BG_OB*h):R\nPOgLO<QRDJ(+OQќMB- Sɖqa( SRJ$fvUDޱ"2TDTM-^#P&U* U-&2钱et >x鳗 uPB07NRpjҔG-X.:Sb Ʉ3i&4w|+bglWO ڃ.NㇴY+UEǺQQ7xF%'Gp@$oJw)g%?hCf. y[Jt\5Q 㧫ܙ+vEk_vpD腼~gݪVǿ%࠯! x9s8L_ʈ:G/ISؕ2gnA麘-) <0 R$TA5158E=5?dd!S֟L|>̈́}M>_G}AE#=d0f pNhrSHy;鼍Lhԓ5R;D`{k¿ o6t>s{dx `uYJ22!U[ GtA3}nSu[T??#xWG?{Ƒ_H~sup яj'ԉpUd E[CirivWTuWUWYe֚1YOEz2A{R*'BB > YJH߲͡I̬,$jMh!jP,EDx9`b]5P`7r6(8TؖY-ݬE({iVJg%LƗѽx|jmBG-=zb:2a R %VV@v" -#{79\ؘlu8R"kp@G bG d̠١,E;Ľ7r6D$TKIJ4%wp" 9$J/ĠUlH_PtIae_PYl]ffH0M>&KHR:*z] $"e+!TC1Y+@EUjvK7.c E[lVkVlJ(yX(RbrE+1 BeldR]u7)a}gU݁kA{wywn|WP}Nxӹ P[5u^ rV䖿̷W| :L\^!3/Bdq2~89B^^ȝ}Fۥ=JXqaӔ墥 ͔Qҹ,KU9Kme/f;k.3Pb. 7n;*9[uQx-ZJz:26{0h6Kt1E-fIh2|x_~hk o{:ctS|x0`n\i|]/b\sW_Egax?a&جtL餙X^Žbl"ՀAgIޫ`}Eiu[:|NVDp4/('Reyl*,lcL.0R,wJt]4GyF)|-KŸ8 n~͒W?{7G+~Pk2gud[r Y9Fo=JY4;t2ۤ lWwq OFy"61DՈ?M"(PcG s/FZL%XPg@U 68fqEWLH&$(^S;(/RZOEr ]퍜w] u_5ډS&p}/ҟ%>Q݇^y"iq0“4Q?5D`>=hC 9 I( $M4LTёkC_;c)=S ΠM8kJ3db.E(:{`Y{]L8aϤَNWoF<|W\e[upmpU'R7JdZ#Nyr{ImR 51cjul*1IXzܖ;%o%oN:"@VPRZ+5ݱ%kqE![0.`i;tsc%~ylu45V]k~W4_fl}zSثOb^oYH|̱&So}G)~At{7%܀ :Ad(^bOܹbP?AD()XRYJ##xYX -J~*u(-}5oVP&3‡P~FΆ \1bW|}x~10'iuF~ DM Yܥn?>^ s)=Ȅ:aRAq cYa6тT)-+`#&DZBI |tN),T%k)F|x<-]gVmrwJsv޲|˲bh6* =!aRMݸ5ߛ-4h[Rr}[+h&ڤdr!o/Y*lLVKgL*(L9'cD/B7:MαV[=?x-7/,lQM2YKXAV 1_eU̘IY4,l=[ؾ VSr~$%ʵF#]Ű!I S%o'\yT]{,}GtS FR@"9R1+ɢB::aCT⃈5@ݳaoͰ^x#r }>k!TO RXLՇel:##/ԯ uJuۚPzͧEd!{*$HA)ecgXr# ]ٟ?sĐN=_uro4kgiۧ;vd!pa3%crGa1QE9 ~qe~'4{rjѵD)J^9<hogRAk@z6KF()QཱིZKdBg`2-N?[뭠W%a! 
A1@Ky\-|6L%UѴ/}\C<B[ ] C-rj*兕L^om&xELLj9 K>'wjTvpcFփYZ_[o']NZU ƁKbQ9?]q Fϗ63riOiΑ=Y0sl0E6hOu)>^-zr=l8]39sT%֏dרseIJNsR6>?/RrBqMPx蟟vjN-?f Gǃ4livųS~'>)+E*Gި"t@2M0>4&*e'l'4mAlM%J[EVs_CZJ#k iĮAT:c`t]QYsTV&I%P{&D'@))߽87b HZ#mg=X@N u2ccЙh{bhrGE ^dieX9,#Q( PNZ2,FBԺ(s+*+:%+ԇrC'mvtO~ӎ%u8n,Cm3ej{"WnԗʚBB l3d I5ۤ&@U@J6j`wҷ{UԲ{&i\,h>"LmhOA@$ʠq*eݳC:&9RXlzҺfE3yBRNd@`SsY]J+^r|,_g9Q١~C`bFB-bp1ZrZ$6"ȵrIFഓڔ\Ii/Ins;'䫑ཛྷ"xctVOxYj}19[JmQ F'D*&Aʢ$=H'T'59O S'|D[Ig9덜 l7a9k bBZmtc6Ժ::e*J .BP+)oOP%f"R(u"Hq3Utl*0ثZZizKvo {`6('E 9( q,+^FG6R޵u#En?T6wCfZ`b ^6ZrRw./GeږXpL8(J{U'^Hګv`DpjqooqN֙ )nB33z'"D[IXi'"|XdWӕoQ i 9(ipC $,:E]x0)dͥ;'KclkqȮa*KA'gԤ4YbT!Q( @B6yigU6ury.q]DQr Ftq]_a\v85yM77Zz pK/)G60ُDU?IR > y`C<;|n8yVHF1?ǧ?w?~j.w^BN )Wl3lb_gzV৞0- |Ɇ( = xb4Ɂ*PVX/KyZ`gXJ rz68G=ߣFd}3FG'{s8[A: W64\+u456%KT/Z*i on \9 KG{dCxt(<ȳb%OHNuQ&LuT\!x҃JVc4h$&%l)ʘNN0 6T>JFIY i}Ur& "g#ح]\:K_NrփlusN#4tB_F- Bלvw+R&^~pxV7O%0n_+0ٺm0V6+ͶҒk+klpm\# P`qmD`ھB\S9Xʯ ]q X >L ~ՅW)x,˴?9 *)r1e;SLIBrL.FɒF;r3*\6zw؛MT\p_MLOvޟub|"5%A2.qmtجJU9К oSg=@j {>p/V]PZAzO~E3}~t}8l77X aw~F֛}qI6p9!A,/ ;OO6w7Y;_hrfҳ lz|xzެB]C|}mpwe6RW^xV@Xyh{t@0&sZ܂ń4Rd֦N d4ᯙ KL܅[hnZvj6lx67 >EvuIALu2AV{ 2jˮ04(._h;l^qgީ{\0"jt' %7˜5z.iK%[A̕R5h RDsSIQw'U'{/ySWxAKt 5q Mυ%*ǒω +ʁ^W80^;Y QGܣyEpuY.aKkRIp3KD  HEur>LJ}f~ظг `EW=fЗ~AICFg̻s$ކKoRW#To,BA2)r7\ĘS ed$]`ZenIXp)H!E\陜 rw; knFΖwkU>ܣq]ki3 O\h/#1[r2JiЀܺd#AxIL{Hf:& .VgSBAHHH(aT 0kQEhΚdYOg{U襮Z%OꥎgA*_.VisZ'n`kxГk|\BjK3ڮA3p]<Y`>"H>Y%"+a0ښRRI`<Yz%iTmX͚RMV]u!Յ:]H%bz^,c6ys$ |>}?  `sqQNJiQҖ;& &$ړ1f=Hkm5#lp:z)0 ϕsMd*!ebh\75Gn&ñ"!咛j4Wև٬~(qonkǮQTֈӈF\2ك sudUI6q:39qIB`cP!EPU,0$6 <=S0$56F4(+kjlQXi?OgոdW(+EN/x?X L!NiE=FTz%3^|/wUe}(wӇ6E=ri೸Ơ;nЏO~TG?6[ mG DMx?hCC`K\?X1,F3((((XNJ[2<,:<CΥbJYDk@g][ d.E3.T.$"YNqI\ۀE6i}' mek^0ڦFLg>^^]ZJg.kB~Zbck>IzM{PNt;_Z3O3;Y`eT)$|p<#`B-(=Z1bJQ)u$n wCre xD%! M(Sr ̥ANǴ2fM؜땝BWRlJO[ d|~dxt(m9JŲ|$-r6+:}J]NI xƑ*TČ @(!+䮑9ڊ^?|R@mۋT)rDXn9S:I~%r T=  U$H9^wWylX8Y̼橅uAR7μBg@2(QTE3V88*$i#" ? O.Fښ$EvY4*"zI.\$M*Aʑ)ACrS2ht ].… 12iwY d y j7kje5vܒ^~N}ٞ}v8|T­SWf&w9{paaI<$GW_~䶦I΍`f=>RFS m=y\r$80p);82ZO&x) } #c<_e"3o%WrInyďIi<%oҜO~‹ީoHӢY׋kЫWg?e1y){iD.~c`B{+ˁoOٵ\+#͚59O̓7ޮfc,m|C̉}0lfٕÑh<([bqZ2֓=lt oFq*ǒs}4nxhb1ѓ>')Z{l}A2Vr8W^'R:V>꒲~{EiƳcyJy&eq/ND㻷?ן>y#po? 8bM᠍ME{pr]E-ErK7GKNoA\n oF${V]l-@ݽ3i4v_fhjkH%' ojGґ}dFIt(⥊W$Fj͜B+ -2O0U|ȣJqcjL_diGfOMD/GwH⸹+H6Wu$,S 6bh HM hb)y h&f驝 EC0/4%xbAIhS>D `cAnkN3iFtZTHĞEعzbgxlv<o9o<RV U ̚BÆ$ v(ydbJٲ4"lV^ZRbvB=pDQJЩN݃<< yMtt>io4oIr+;$|NϮ Zgrsz5Kc) A` W \z-t!/584'~sIUPDU2 R{޾ tHSh#PxўfG*b4HO@M'QtlB者9dCX}RSB@ '_TZ={UWWc`:\ t*hH8;#i)t &5,sXt&϶:^z;96p3w>kN[պ;p1(i/F8Y$ocV)#Uiv!z+9Ek}ksgv?Ս.dy8܎m> ޑ_o,*q<]vn}9PE<9"x/"r H\xQdG_sdHXT'Q >ذw4vNe,{v{N >_`J;G߽o20mͬo:R^7Qqw`|G%;@kUˑÖpm9l ,.Ki R4G=^Od0OXW'iJ| ROPmTnY{s$kG:1L%wIFP&4@dnUSKt>'^T1`!"4IE6m,mgoM=TPPrϛ[`لfۃX Ʌ|_Kh 3=  `6/=y.\._![fO?8Iu*d5C_p~6;)V?>PW 飿/ lqLO5SM>L\Oؿ?5+ r^I/4$Y} Hѳf]#gz0ٛ9o }P?gZފ& kz߳N^v˛󂞭d0vv8f? xbOXh23~K272{*{SfZf ۥ7OzIw_hNWyG#n3bHE M5)kl;XP`~oXoIYQ(UL^-5z' .aʤӥE&DrY":^ont;ǡuѻ~~[a1svyW|n[wʧdR@Wʇ&Q " ENDb\G8H u:{yK]r:X,湆8JFn$kS=nY8 d]\V\`FV'B;Knsu`}+aªww^\\b>|̖*]Cg[![}%|>O>4ZgUbf^Fd)H_JbO׹nCyx''%+mZQoͭ +MFndMDU A&Qڈ`PE RE&$ֳ:4x&z_k~&Ύnf>i>~ m2*mM=t11;yOn/WO\ ыt&X":A]H25a4 IIQp@ pB`w:;$yDoS[J'1*jIJPmoY/@oivrdu2oyE,Y:`gM2yNxi(3n8s2f4eXW3Ǟ9̱ kL S .8(5~+䃉fŸY*49LmLVȐJ(<%g%մ`#Hlqgkg%KstKѐ7!M<~b6z.nI\e2YH#M ϗG»W=6uJtlg!mRS7qPص@k-A?T fTi-fs4yeGpýTH%dٺ{u75 ?^P5|x>T򷽾Ӈ:62igu2|G*sgZ@4\4QtרE2n*NUݟϬCCט[͟ un>4Zsz,v}GthϗWm\)Q"Sa00'f4Y|8]A^%ё^a7qMQZ悖LDFkJA`A_3*ECt ӽ}? 
_ZCnq¶/jwN%>\>/.+StI5v0:Zq!L˔N`:&^F'} IBM: BaNQB(|lpkmO+!v19i}t9ٶ0ӻa񖞻>wÙ "O&\KWS R=v,õT)õp-B+:XJFt9"ت[/k!^B~1ٹCLNZK:W"-6%̓%K P8\:yp/ HЎ YX @+u6qZ#bs6V%Ng5zY3QMTzܕk.TQ Z2D,Q;/,BTm2z;a̎gKH9g(#++(>e  l$<^yQzԊӫ.MowGw{zW4UJ&yw2ݲ$j"# N:|!a^}e0) h^=S%[Fv6CӲvԅ5Aek,k@ҭGP Q,$M hNX\Ew*{k':zJiE^!ײ1w\SO\UqusUo#Js:)]&Q1WU\8sU_RjޛWhnm_cҿ޻Oeܿ]|F~;k!ـWW_~f<Mԛ;  ~'NZ\}{v5v;^5 tL(KGFׄ&|G ?Ex2}xF\P'ijC}3>Nv{ ~ڮzo-s*$Ԕ(VOe &HsCBفjT_&F gR;͜O`44~d=lrUh, , #w,X!"A|(Ie;D9`A1\;;}2!NUiuU)Q3mS}W{UרS1W$usU\FseZ边"-V]WJJeo^w\ O VqO]w~J5+'aeuء 6?=7:-a#Yu~uz&b 7%t 3rvk~|sousaƊC^L~.Y͟f _/7>ݥg[ϸ ܃IY2,MY &cg1x&9*FX->{qHXZ܌95_Ff&[.wۛgsy!?Wk$RW4;4񧼔PtXQ&ouA`laђYx6CDWw gzʑ_if!wҍ 0=ݗ (lMdI#I'[;Z{)4/_ |m,D }JNŎueɑb;ibרT $W6'"IE K5dɃ\+x-K8)%i)Xvw߀m͵o_=7ǯrg<W: :&˼"6 h@hɣRxPu9o]7,u6$[\m|5i0଱5/~7/H!}O*=}w#NsFFeBW( O=k({zMiSd\ϺrL+4)b>J;QefԮLiFxEAv^py!`-jf_33k0ԭyFV2ѮQmǴ^i#δy*'#W))i7HDW ga) gL<3&6Ba69@dM OMf!ϛ$] L9W\%9PsDr RC*S:A;{|=x*TD9":=tU"T<#ivdٔWS̝?lb?jBv (RT۳R2_ŀ<ia:<RDQQ+?S?(VrcGZIJߦo"gexR}fQy(ZÚ~Jȋ}`l&Ʉy<ޭ|=#X}.}/RUeJ) gCO~U f.v54qS8M+1å]%ƖP^8^j根87vpѷGG~, ,k ,j-uoV1 %H4Zo2~/*Fr0}> Џc7ӳB|FwxI[0L0u~LtJz{`퇟~0"rqqaJ?>ڝ"Lr\/.h|<FzaauR⃶ Nh6 A?YSBC|,tS&(gg͙:罵ol;RrW߮GUi>$㧴r[R1mq84B:w?,ҢMsM220S/,bep+zStGmKOE#MO Тn?3j3EevN.lruqöGNUu"DnaЅuw']tN]ˀ%^a&d kU'}HNdka!GNTq* 脷Y)Bd7<ien_.pg\3|[[%Jo͢ÁLb!(uqI 0 M|D APn ^/i]~V[j+{'r,W6cּ T OƗ8]9h|'mΉ-@bk_^Ǔ7f(t,^(jhZ\WR-H{ځOWU9n̕||ϺBά7VF3I:x.tQDs2U8 xN6TGJ_J i9 Y=g+LPz!DT`y$L%/> ]7jP)iXR^rc@EyљEG')]?&Ξ'B)-Y3s  6<# ׄ2EA%RZd`]ZI/P='1a\i5Y@v!l}V| !'#! QZɵWl]8 5]LڒLEIII>r*jEUVjx%׷= tr:=.Maג(**W t5 3) +Nsod% Q"\ y] 1f--gӁ'l!WCDBd&r$fXMXTjq(X(yG ٚ]faWf~ ~Ymveu#yA:{ͭu.2.H˱k\j bbΪ@GVY0gރB2pduKRfʈ]M݈Gøb jWӎCQ:sQV& 6)FeȘE.MNՌ?D0R$#ȕ3e o!3 qXАMı,&>撡d+a5qva/ 0LL-8EeD;Dܦ2ٽ Au`U6dLN,s ,)|g UEDg9cH򍓒 HgJ6٧NlI4 |8{L=ty:iɡ(+pm\Go= >"pʳL3-)3Ȥ Hq`Sjq(x(#@؎3ss5UlyEG\]3X[(sҩяTBEJMMlfkTsspҹ[1te x#;ww'(N"QDXE?7H׃1\*TLRi!Mu tFp۲D=(Q*j6%>-Z\~gW%g[yurzyX2Ӽ@St[f׋ 7zq1xfURgZߒuQ | (kb# eu<~:J);RvS2/s%SD5:Hт3Y]tbڔ oJ"Yh- TY,1a8!KqbpPBB+\f3sg5q<>d7B>iޞ}fqhfZ*g6{%n/W/XHx y:ˑYl.?_OBD&wVoEQ!o ]]&LS8*A:dӨ s>H040a8vT}4!Ά?w _zuz @?3:{w㛳7>aN|z8q+0 &7@ 7_Nu[Cxӡb+ X|q5js '^*=7;pK4Ef cLt2?3@D*+2WwQB]8spWd!|JŌ4KK!ܗT~~t[x{_B2_;O%321Ũ, IO:鹍 S.vzD4MS9(DN 2.2x@(uw:TpblAlAx4#f;ܙtY"Vun;/#D[rۙYã-9DiG`PdCYoI9h*GEp9B 5D U(J^ޚ!t [ 1SۛvyCJb #5:78Q!(8v)&8&-pV(SWanvMʈ'SD"O Nx,▋=",P v_(e(r!$Ea;G)5#Fk~&ਢ KlzT8. IYU4.٤,HR!`dɱ6$Qs-9˝R 3Kjfec~ټbȵȾ&$zєjWXgY^b"ozy0lJa^iGjd@Sknl>؎[e;E'^}y~ؐX (r%q ppJ_,5,rۻjIW{r02hpH/|qrOɥ|1--i2br-v\i]jA6A[rV@W9J; e4@%tk",5ho4IVTdnchA0EN93X?pV1%w3ud㙵2r,\i `HF## d}tJh9 Q=uKd^˅v)gޚSzpy9<^9{^z_]yݱ=dG[yBemϚA1쇩%kBs-c1s)[x LqYmtny$<$&VD7bNjD5,:}01 ZG&]n)$킰wIw+ hE*^#N^]L2^5ZкjWsP<#{,;9kl]ۇ^Əw^{9Bˇ!ioai>Up4՜zէt\WZ׶ڲZ0g9骎˻66y/n»JCQieaͭ_H7Ja޺n>7yvQ5jtl mPW[:tW#{?zoJK<@+,;?mLR_X-(Ɗk)X@ W&8v-趼KY~wH)cٮS|./,5c`fѮyIƥ{[wF7Mn{L,c05TRp TQ͍QmQ'(/${;A۷b+{<$vBEUEUNRi:\is sJ:8a)6]i4IWd$HjI)F@#V&&Q2ӄk.k{RR0yaRh bBIK˙eFHLq*]E[E K/5yHoz\?5];JR[o8f)X#81))9:6Hc¥TbʣO!q(7 3ّfsHEwR@s)sŴRTHh z^TKU:3(* %B# b)K":H֎DBJ(2Lg"^aY^L!$;,Qz$9B:F$,X"ALhIiyGmokF~aB0:" Pf+A0^9'NjMr.߶o z5+J˸2f oR8 \ 2@: (z!hZYpV1,Cu3__b Zù(?.dьJj0 =onҙ\sIHҎڳl'|bW5YD7-yAUÁ `,dD%'Ιxa ёG{1fYgv:ZÐuCL5At:oN*M,̧/Y e9VQx$pؗ]IP.{^Ox\mA[eW `]%"F]%h<%w]%("<;vzJ=C,r{ ֔ J2/*Aծv+8FtA] ?\|nԨ7tA-jVF@}?%|B'pR5cd!rL1V@&DqM`lד^(*=+י!06Tr3%B%W׻{ 2tSaJcKq3p@7oujEM(m_|{rYGH(򹖜N)bxЄ> O*Q2%cM-y(AT'cBC">| '%Iv]%(1+dW I],m.sV|zJSB?H.N^GM)[ϦZK=Wg1gS[d. AS$wSp㹆CyۺN97l> Z J]~;agR~Bi*Iʗʤ4(i ujݭX*]\-bW]]gW o]vE(hLv}VW@Kuv$cW]q{mT:F`? 
߆QQ^AV}RJA`ʰs8A]`қ˒,Axr(Q$C2DXϕ@33ADK#!956GPU=wVm 5v:^5~L:6E?w3MO#ަݟ%&,*V~[fN IVb7<0}seQjs~>mzbR9c6([I1J,rk%z^gr'o< Rj $rđ7FO%[Ial,v`Or$/YR8A)vdɣȔ-ӏn{ͻxK,_;7ݧU6fech4+[!RVR6mZhרǃmǴ^iQkڨՄeqx-t(&ptYJ ](J ž /yW?JV7Op?`Nv+L/A\km#~=ol|lCńMJ[4P0U2bNՃW= {\Al$;u #BT5866_ []"tOjZI7!Irʴoڰ? %ӎoNx5 ϿM[]]^Lfחo0z'&pztW]6d d5^]}o~+?L \l" gBpN M΃:!z)H3̮O'〷2s?lş&󉟌s?$>tq<4{7 䂂f֎|跿~"m[~ex-NefdUjt0v6Omx9qb%;m C[f7+=mhnmӿQ:Lw< i͉Eh]f=ޱٸzB 6mþa8V{ 3l3޶LFMtUw]瑽q^u4ʊ#(1s;3/p{yȥ-9 5)g2 r^^wLqY$">{'nDyufA*,V`i fGCۭ?0*ѡ\^i`7myb#wLH\JQR ep |˩O7P)t%׎j)^[h}IsIX}iYAt[O%=-\ۖh. 7~)Yooă@P!}(>#~?h)?;4 N)}.ozV@Zgy=r~xH/u:6(iV_Vy.mr[SAW 2ײu(*_K@p qdL(#q&`F ̓BY #J!$ε#λ(BOLF"q"zIYmB*ُJ1,,b!B'f&Tɼ|Fx;3wg(ޑE##)%lzDzg-668eW KA IH,N:/[ "/L(+r+$Ri]JC ׸llNm~;C݆ծ 0??ǯ'R϶u\]%1_}ʁaֱ6-A97`d?whZMu?y\[y?8fS iT#Mw:^ALBsemDnB wCojh)Oq\>Jsz IQ1wIyX@L8Ɂ #E-:kd4$XqV[zI{SK^bH)&~ ȳ>;M&%ܫ/څܻ٧hVDgB%Ug1󪡘ٽT{SYokjkeo~#组bw>0Acoy7OWg|?~_Oi])̆9E-$ %W-e;5q.қA㲽@],aۈ袭mж/my"3[r\ƆE} z?4_n:g~X^"U@6SzFzvɮtˋjc~r7{?C\Zo:3D؂.djg_?u :ӷH9}#Hװ}딓B {R(N:)$|&e ewƏө-GS^T:fA!((!h2DSGX5t MRNFƑ|Q޸ܡkc"0 re CBxb8OD3ro[\9jje˒ }||>n}_^S;>$0gZzw8,kKqwA""Jb D.PDSdN f:JpA?k#SQ>WyyC I /kA'h :pd!1%I ISBK@w0 TԄΐ1;Bp8׎bͬAǔ5-[)t Hs&;B-o/% YٟZM6-~\#o4v'sv/\.U]nL@˨6(|v#>txߛFkzơ!8=ԧH0y/c[],~MN0zQ(WmkZOQԜ>w7l0 X sk? _]YsG+xȺ1^ٳ݇1E$! QRj(4a]GwfYYW'+] Ɠ&o|ohGi'cz{}n$Wݬʯ0t'4Ӓ+>>/l䤰r$z+HUӲR:(>܆t!8AY4.67+wbG eycA\8 w?闟_|˻_~D'|(G῏fG ૮~C׶V߲ka` Ԝѯ5^;gvVS ! !4R+C ڲ/|o3IЩآE- ~p:&Q"mUx%?tn'#]K;M,M>;mv}$J: hTՈ]0!FX!!gkH]hțG:r3D:t DL\g.9hS2P#DHT9+nj2q` V;Н;d;KXղ 'P2iv,dEy$ L $Gчqmд%TK 9i"8#R&Z"OfB+2sqkt-R,PJmʈ=`9{3SDW7'lFBsuQ9m%!DFsL*Y%y4ҥUj g{u3 3M#:Sc ŭRPQQO*h8dmR!HӔ3/*Y|RM4+ ,kŅC1{9Em&[Lǣg[PTDqLf %&7\k#S#u.[22hO9ʃ\xwc8l j"F LS!Ȕі8jiU: :Q!mo){ݺ]:+mۃA-L@M<)j8pR]b|{R*PnoF>$ H0;,׸跅=ǡ7&tp7] >; %͟eC˳y,1_auF30@Gtḻ,|oD5QEUңoM_dBoa\i߮ 6ƇܭsxC;xr7,cߖRx^WQq|熳 73`x7\BV/Lm1J R#9Z̊MC,]WUAp G=sB3 ]PmQ{GϬ~,NJc-Cr7#RK j\jߑzsd]`)B KfY}ʐq]RgcBkõTd^ywpP=\Ωfށ'7{sdy3{G;zqgK0ӣ`hv#ؠ3A?dt2JHRwPpтˉrieVА t$XKoI沰qihuzN%\R5)Z)CU!gYF u:g&!hQZ(2ӑ!^{b~qi0]c_ߔR=,ߓ%ȵxl=,|z$W]:R10l$k:;Abboσl~iDa3?p P7c7sʒ0*gdrF{iqh+<<"C T^7zS q<%Ǐ9S Nس) h*g1djZfn|y34h,ʛMƩfLua]se8îΠ(ry0hxfxe WJP3MWH]O;0B )EvK#1rʄ0MN}^J cF̈ GM8i< EHn^igl2ȧhy8F6VYC|Wbh)Bx*b]͚u9S>\#FT DjJ03("Sz]2Rcng]Tz18g),̍$g*:h1]5Sr))Q ht=0cp[Vhuf"GwV8+ҞQj|m~ sw xHp캃7ϕ d|k[~Wa{{YzO`=OڵYb_ 3H +ӓYצ_>H=ȝCNuNW.^ vjWw/Wջ9>GȖ~+ ^IҺW-_x==Z^r<N9>2~~~# /\{'ZzǙh.t߳ۛk5O0-,ա^ZHBV ˀ;7w/#;{n25psK`T| zEr3rD4بըD (4fE "mp&֐+ 3;S85b]0hcxhb{A%81f1{ 5̭3=gIp<*V1XL"SC4a-7Ajb\Zy>vqZc7)DJIB$R˹ ,cK Opk q\UsKUо s-FutK1=5 Mi>:ɠi s҂ЊXj )XG-R;-42r$.5ͭ3j4م#\ 1uT Fi4&\=@V(Ք㡪9\pe)CAN@9ߥ)R'-EkxQMJ(哠ΊzVm8{YW_^u:%ِRE20M;f$\)@x!y2cAd 43G@MZPN?[Di1ijufjhIQMxk.x;XnP 9i~LJѕ d5Yn!S#эC]1$PiK-Ui}9rݣVhIdb?V%ѿ+S_W{GA^ 6y(i-JPE(olqxi%7R1Z&2բi[m"%_\|-yP#j,`%, >8TBy`>BNe$%zw!DøjQO=V`lUuv1aZ kp2x .YUۥ)(( (˄lL&VIZIK6i&-٤%dlҒMdg-٤%dlҒMZIK6i&-٤%dlҒMZIӶlҒMZIK6i&-٤%d=M'X&? 
xY@B\j#^@0j5nɹ;5c9(y搎G9zfK`Z%I_zɷ^3#.8rS?}p+1T!8JN/#ƿ%+cG =/8;άghؕ2ga (lR54yntLIH,M~)IpM-TU YL[9M/~ԘfʯTc#h9ˈՃFBM2Kt`4 X'$s*xܔ.+qh霆4K;^ wp6P솗6 $2'ł]kKaZϝr `7,؈5Гc]" =XJ/ӌ|8:-m3%BOYE,`BcX 9` AZuiTT Ū7@u>/5h#k -ogu* ؙ%;]`|zL]Jϛ`n|*>@{wz<kǶv6&%?GۺLJyE+Z"G|5C~mlcL!H]imrB(;KS\.ԑ1E"XWR]DO g \]y==wKz<) Dd],\%)CQR13ː4Y=zt><5l(*#ƒ B;=`0ؒȨp5 2:jBG{dLa-2T33,0,fFNM#+}W'8]*Ҹm!籛^Ty"׻Wj(b>+d>B t"zG#:?9=b >yxY9K2ܲgd^W t4:,G-.W0$8N´6{7_!rȇ[\ҥ7I拓 H;:37-fZ͙<<Ȱ%#ea &XL@A&k%:!{[?-Ld:dɿE q@["KIbKBZscN."dgyyjAH:o;c-\,`2b=6MVl˙l!p+d=r=B|őOq be ĔrPYjs@9 CRC',`QC ̂proDNK=.EZO1ˣޗMOR7GRg弜tAIN.땬|zps[]SAW+}Sٺw/T8„Xp& )2I[b+&xXua GkJ#J@Opʩ\iK DdF5sf"Tnd&ndKK3B 4k"yӜz5yw.rmⱻ;)00i~:9bcK(ŚLQ|1j#AhSa"wA7`'D%$ؤB:`^e) Ӂi]Im9!池v68Iڽ1Prł"*:Ɯ3O0)w|3+$f>#r!c2xh:r1Md04&3fg7AehVsǡH3#"GmRpw0 ᐆ`Ur+򄛠2X #Pr-W(+"jAl8XG!1p&q >EXҠH0IXfD& j X4יMKEi=.nx9X"g=B[aWMpyG;e8 XcTiǡx3!; vlj6zcOb&=64 g?>Q+I>Q}r6G~=BpXdnjVHHw| #=r#9:#="#= #Es񋚚z0.[iྒྷKq ն q:tG콮{4I 7_}Ubʿ[Ǹ3c ιY׽ l~92+Q0PRgQaԻ_涝dxP=eȲWe  hUKYڔl82㣢V6>߃ NNhJh&iYhvеΝY m#z򻸧&>J8I86b8gńE Ң"2nyN1.@ ʝՂ_kDkM@ǝQFG `T (Grs~ۂ>;^2v0}E;@),)$=\_UѮ;2đLh( YC\S3 ~X1} JH=5VϽp8 (&Q#E,b ;g1˛ (WK Y Ocj,j)<µ)8z|nP6q hzDb+}7'I&z*/.t9ݼ&NHxJM:~\\rj %O1NLcH'3e H6C14paf 1}%;*#R"%1dNF03,* !V N O }DQX+TZC $ du/Ω 3#{ 6D\*%$r9<@LQԿLiSzm8C>/s)O</$D 15qWC8(2(toHy 5h5Bk )j4*(Z`H:.M"X.z:n5e͚ər&I[qp;ԴtffF1z/>Y:L˨QJSvtN-m(ڙȈ!%+Z4?o?(bo AzKf}R P M  8d8GS)( MUs;VKaA{oW  N[lB!RL[/&n\όNI¬vx*%Cz_O 0R_m}5FWlҀ($%O6CO!hpUl1Y?"Ɖv1rғ&מ-]4[OQyYxv[w4̞2ZR%n鼫 'MUyRʻYYSKx9^:jsr>Yny8Nugmou]v<+HuU+ I sPo˅KUe,h!.f߳ݮ- Љ6MfMg0xg}ǘ󯟏_|#ߠ0A{˽j?~@:WުiM+ߣ\maJ`vnm@ {?o&Kg\vZ+ijlX!-\WܝuV9\[E-շ7_);5(&-@TkꅗNϔf|5>K|Pv1 iځL}:Rod qG~o4M5ǩȉFK#pD' RiX9'KSc8/!\W%"oNǤWp?qOZVWfOJήfa vQK&Du+ Pz&ΚϚdp|Rb·i i}m05½,-z]6Vpo=9ТsWͥO롻h*w~OX5zQEs|gyU ayѬ?`f>WdX6E/w4zMex5+`<  ,) ^;za6ۃ%ev]sNP!bC *pCc1:Ҧ !:,b hYśCa䐤XHMNt,1>"'^qC]%8{$Yhgt.aR2"Q#VK*V Nx,▋@=",P?{-,r޹CH"e#€vxRjcI"O،]]՗yX$a}})l߷zDJij(mq=3u*XVɲJ^_PF- `lk=@`jk[#rz_ %i|pC5Y5Pswuxbc5N 8D,U()ɖ B 2vD26w;G޶C[UA[AŁv/HXez._J~ėqv.f!D5"Flӛq0a:@H3M ~lǣt̺ ӷE=l,O*G,ht{'9mć!zwwjr};HˀܜjT#صrtA\B3 8{Q`j) Ձ vo4ͼ2۬񌥫 ]?yN7`jЋ ]ۻ3@lxhO+e!'1FwBF2Ojy@*P%Lhzo`" JDc+(uz"ZgYkc F=FS-tL5v1X kJLPd Y\kQ3׊P"b /5 VS@B>'-)RTۖ{M3썜-O<,'6ծd ߞy󸅪׳[[?[Lx_}q<l*XsV0^XsxG5ϒ6͍@WvX* n_G7wda1\f;!lxd}~7͓/l1vCoyÚICqnx}wVV!{\.9tZ118i_vh{Wxno~ ѻo~y"gf'116n/}$b( Ri0`hҺQ= 3x {߉#j GcM!rA*"5[t9Xa~-EjGPHɐD׫A%PFT0%mHSXSi[MBGa/FΖ4L|="~;m \]֎9glCqtgUXΩdvOjz\L^zIkڣuֶ`.4 FG"JrW!1%4 8( $H&HœQool{Եr|v*+N9^K|pM'ZeE]{>YZu;e]V (|7KÿA=xJJK`g46ȶd}CJΙu,'OU <^~v(8=Ȳ/ԟͻ}d8-]RҊ;;9/Abl d`rJ 8P)`*_BN*\65ɉsmvsvpah 5b't 'tRK؛NU*NæS p2ꪒ TUURàތ-_=}$~`c~\k~Z-~Jw\~@]Ѡv}IuB QWLTUⱫ+W\A]uh J'Zrݩ+VI8vuUA]GuH*<%uU vdNE]UjQT[jPW]U騫J+UѫJ{TW$уZɾXMV=NYrӦL&mܭrH̀p$o<^&~c~9P#e#H_H"2JXxAK_uG3?I}fNۄ*&]KGz{l¹ݺV#L?,yBCxU6<[$/9mq7*<=NݪH'1*Z Ĩړ 0 ͱCJ%bC5~JJ>xs%לXcWWL%#zhҝb>|&מN(S)- +^!fo74j=FOr6iq9iBSiuhku u3ŭ O沥w?vI6BX:!K{h)\x8Nj,EV*.-u6{*IuM %24։`*J+,?"qʔEMQ5S~SwvI8,XlpZ /SVErبtDL@): GbR=[&MIRYBfp2:)M Q'FZUQ?^n+eɝ*a])WZU&/3S/['rD(OMΐD~0 HIA)wరH%vXV & Eq Fg]y~2f+fE.Nf+y[H/V Bjg59ǜ [-Fg6ԣh(]Qa>MsSp" YBRR/X+'c^JdHEEBwL EZ zݷ>Cvӳt45qwgiҴC[Juۥe[ Qf/Ȧ,}=dR_OϞ~А lq!:>ZOK㼵 oFB˰@IZɹ:5c)yf(++D) ةIOG"J-u3a̺Q_'ིV(0$B ȶKZؤRXN4)6 z#gKtDJE}@hժm3_C͢3ma3ȾviXzk] |c!q@GBGJ{CG^Oo =\ ({ReGiG8:,8"+ߟ?II#)kzKH 9w:)?^GO</L֑vbjA{ ]oGW|[ CA8$qw!k/kr(JgTflD4kzU]]uZ]G 6>n;0Y/PYF'K'MjpfXB@f04࢐k Ae5h x2i* 7}I{O7hwL&F'+(NWz]_o֎eHڵKض 1Mc?=8ב6_|2X zϚp0آ 1~,tm7\4A_λ i^Ek?qȟMqGH琣"se{y5Y')އQ}gy kÒ`6OYk.]m]5Qa^?I*"5B !J ՛ M=\i{A* 0_'0-o_0ԃ?~SQnɛ8 ^Xo:@~O_UF>Y׽EoWO9:Odr MLeɎh8ǧ5*t@3ڽ/{)(;HٹNO'』2s࿣7l~36"d>Qe~)^q%4v.xpzPJSjo/u~G5v/aBWËu!4ϲ8Y{FciM5I u#e_.MHuQK0͠>UEN\@ nxlW(&[{ۦU~doa#Й!(LfJͽcqϛmI 0 v~{k$0g\]3A]  ocNu&nȦ6 0ջ.>wʺQV LD 
3gf5_B{dsK=9"Ht*̦'!!5sH #HFM| )'^=,EE/"жX/@vϩ7R$Q1#+|sr!-+2tVTɞbg]L%"- ^Wp1+\ᖡ8'7Lܠv< kHз\5&t{R'SDSb6*V(W Ex(WMQC[x$.B`D yhG*SR0E]O%J*uH,GkƽJO Ҁ GF)hU6Bc1*Ab18A C%"e=Չ($Nj˅o1rv<Qeˢ=6S_eho(-Xd$2JTr c+8HD(\EmS*d}p,i"0p($H$7FPdpZKw)EufMMqS`vuPɞV**Vx>NTI6"OΛ Ƃz_rQ}3Tk.-|0Hu A`PB$$Z(S9brci!'zMWֱHtD+IqT HJkbl׌J"ۥ8c_]HBA#i1ˌ׸9ۛnZUcO~۠hn4矹ƦXAj$ۚ9%9BT\e)t@jlnx8 CDs6^ Dж#&&lV%ٮdVLCŸc_͎Z`?>҈hI^S`:EQ0*Gh HRiH*BFdH:GmXd+ r04(]6F""+ƃy.jD^X#FO> r(=լ[mpcF;w~/Q-ׁi\.Ά5b&B塚ޕ(M**!k*oc#;wd(ޑD#')He%!G="ⅼ?]F7$kzӖ[߶NrZ^ {K kO5aQh\ދR'mɘAw&q1E-ڠGr6[;zO_]1RK/;~{vؓ~w5]Mޢ^:_7] Z?+_d9[*֫W<;{C׻)Da;QBSh/Z{(K3Oq姢FxR4J88˂ TK  ewT-M\⺥52F'N5nphZǹ*D&*4KB#1OԦdvze/7) fr:۷$!5Vb~ǻ2 ּ=%.:.wKby5$dhGN) ꄥ@XD "d1CfģmQ+ޭU|iz^rrDdZKІiu҆ X!5HNN p}<6 8(0oSuS: )@4'#sN$"wZDH Kp!BH9QG$__BGn8K.$y-$f P,֦$.wiEsPt4Dأ }s%quW"Yp"uXN SJilAQ߭/9a|z >}!-up~=Gxя#[u"UQD1'XmrʄXpކf3rs\g]N?yzShHӢymOu5zh0>]!Zd4y0.ޒ.q8٦f0*?uu{m6._;ϛ\ϻ OW=hѺb.pFóvfHo<gOS;'u#v$!b0l0ì˘'~*>]_97W ̣EL¯"@pgCRCx; mDɧ&|q-)kƽ+>\szcrm\s7ݰ9"HA];"+H6E2{뢡TI"_F:o@K=4< Axh_CY޳)7틍M,A6}H&fGC.Cak>wx#vӁmlXpnP#r7gDod:I4BK^312J{:Etj13-܌>־;;wH[ U(OeW#\YÓ5 $ QVB&R2:!0.zM<łB7ȳf3HJfM)yN+!6&ASr*$*L>l 'xfNy A\%)h t:b, 1*١Wg^M':0/)B{儃*)^EN=a"@*v%$Ic߹'DI ;HOVڈcODL\P>vUIi2Sgc0m0&%b !HIYY1[5G}2 XA& G1z-%?{W۸\tѕ/ ~(.t{BmuvEdGqXHբibSCrpf8g*vFf0!ID0z4Y:ۢm۝A;w RSӺ;p5(8z?.'WϊmrJB$"\o&Y+4qf̱PpVK&ưH:D$K=k{v򓺇;77C"4^^|q+2u#$ݛ?Vgo.R( /qqc(MS0#4 ~KУIyEUje &cQl ^`{m;k'hr傻vE+7 PO@7GApiu~cva:4ݱط! ~5ǟڸ1UɃY۽fO4յ@Ubv(S[>:KRuR1k哨TA7KE.LU OH 1<8YH2+0mC=uu!/:jO薵e&"Zy)YB(atTX1<I'ŜQ'[y}tgށ'7w6ц ̛TsWޥם'fqk:^e=vAH u2lV`Q.442"Kz߂6$3'hˢ c0cX!E3q',K6SpJREdME`Ng"5*ɭ5EHN{&\+Eפ{ϜF])ssWE߇춪 ?zz2ھH /?XqN@pbIhICQȨ*{A ynzFG$2#Ř-)hD0F31SۊiDGY"TE) #&6:Ƃ8M8_2Vg4JL[YY+3ղ585 n~9n|tGtƠav7_{r=lXZ_;Y,u`Fu\fT],:㊲٥wA> mؼ&BEf1_ C(KEǫ.T.jqxoBJ֯}P] j\)<`}@ks ^4݅ţ{*.3:YXi#k\5Un$jLxSKIC4TCXb yGmie6?$P jH??]M؁dT0&iXġJ<H=X隣%~&$a=JH6%2BgRq UC,V1u!ZDx;mFYLr%U}Rۿ,t`8T k|,[4~_>KGEFӈFI$UD4:sٱV6H㈧NrnUjq.# $xY ڇLD o.9SS)\pEFIv?ZWyh:qvTtJ0_ǕB{cGh`],gmmezf-zv`veZ)GYpԛ,W _"ZPj2d=,`VKN)7XH?X wh>B9**ډkTQϧؕg1 iO_ ?3U#QҤoQҸcs[xt,:ڐԚ8BETg.r2x9L(#nek"x.'*m 9$s%:>^;:1vtqaҊ&8Lx 'ʒ-#o>.gi>gϧ= A1p>&Xơ`mݾ]DȀ|v:2(f\K|}qDG(`\9I5_S)eȅJkZaIhuDE<Ρ}e/p0jc)=CP:'b}oZ>Qu&Gœwj= ֟V#wW1vpB`G#*/EIzlcn4D M+T1 upF$cXmeUZyu._/F)tFhtEѯMCS$T΁Zz_ruOϲA: d' 5.;"Td ,foʋJ&Y׊TrNR|2,%`7\è_tǬ eS wB-¹:T*>+WO-kSWKl/\lc{ Wy[F&c^U%%y{]CnWqM91[A?D7pnɌPEOJO?(oԖ(!ikz=١ڞ&aʣ ,&RKe! ,8#[Ӯ=6yܥdwcw1KyJN'd^x 9q("BYQ&Lx܆S˒ʀyсV}*&wyTyLw"-vgeDqoJ9P6lMcdžVVzmظjhLFبaCv6|n9jpXl֮uzma= ~mKP^>%{Rde!ϷߍaP.:UΐFX%Wrlmޙ6tQ/X飩GL%ʜrMH*Z"ZmT3$oorxn!lմg4-6SyBSkp++s=V5C3 c iF|. T9ҕCUbcTm*\}J}B_p({2y)e0-)|Fx#VAF2ID'l[0Cv\ڃE4M3'ĮSy˧`6uD=lv=h ,wκY|GM *v/ʼ 77Wyz5o,TfV. ,o#ҟFd?FN!59e-#GU3nXg.ˬgre!\qhm3@I+8; }r}`#%!\M|+@ p QVΘ tu>te!xDW ++9h;]!ʊtu>te yDWXIo oؾw5^p[1]5k햮U3gڕn@WzCR"BҾ\]`7tp%6}+Di@WgHW*#|ҮBZ]!] 
MvGBBW%ΐ$1rOP $rRTb4H t n_Uc$'y*Gyb SZT1 q*bi]`ι7tp7thO}t(- JsJOfO0ۢ+˹7th%}+D9%].(5.cl!Z|Dcr-{pe#)o o4bgCWЛT6lv7{;nU3sPJte:t詐DDWvj_ 2bNWlցΐЂ K½+vhOz<]!J1hWHW\jGtDFBWΐO:+,]24C@WgHW[ I?.vD$Jf3*&e4k4oѬ<-<7^dPw4YS F֣AMF%c(1=KuFJc|کRN:wf3oD`qZ=#>!`/ v}e3]] ]!]iEil ]!\vhYe·x?t-#׮%]!]YW ++ D|Wχl͡z<~=siIf׳) `|vr60>lr XBɅcѻ+,]_u?]#.{Ugg!ܨEⱽ u[s+ )9;+zv;IXݲ.xlPQ2ſd7OYûw7l>b[]ڮtf(tCj+l(}+WΐVw6l vJS9]!]qChR5| 2o vuWh3j0ϑa[  ]4Cz]!ʾݷ0ՓЕ$XTz.ldfa<ܠK֞ĬÉ;s5^#&1ፎp7gD+{pFIt:2@#B ҾJkOX2o * ]!ڮR6Ci9ҕ\>vtGcypF]e{iҐΐ?'o |D@Wυ(7(!Jw~)K3kUCr ahW Q~)OWEفzFd#BFzCWWo vtTCB tut ^r]!`]!܎S6Ck;]!J:hWHW~dJ) 7tp-%7%s+IIlbA]z)Ha7gnnne4Bӈc؁ϑѕ3b=ְaB.c-xV4Y)fa6F"K)9Zdha8(Y)j"Rna w+PV]^r>V'<ؗr[L72* .z])e̽kдWk* B?RUUf"@_M&$ZF5"]?@!˻+ho.Be3zieL@+iv̶>'3clkqƶ>5XٜÖmݒ֢v-SI#VRG[GwhcV/LYBa-kbm`5H3C_*$7a>\l]!z6b=h}|"ӹ$04q.,sR<r*2EZcC q^KvoYg _VUK`S L&Xb LTTPtV:M>N|kBE?G j$(Q*TGPyk2xȹm JkkcK`uU6Ņ%<#31זh^#5+`&YS 6>k5CPѮZj)p޽EGUTfq-s5Nx XSh3to@:T%0V7drcMhp} bLMyM +QJjOhbP*Tf47q(6M0BC~ XpyطWr/ۚT./{;U0} +)0Q GAy0 n;@vٕ,g IJ3Bk]\e lRâc:,Ohv=UBWE{0z-4`2:URu4 F:]^K9Zdg&\W]dMxn}Gm!B@8>WUS]ߙޒu 퀛|h"1J_2d _>7.OaJTM4Y`U>b-=ɡTC\j`=C^s` Ѩ %@_k*7a@R@"te(]"Tۖ8ZЮcD;v@n!zjC  DVՌXF?Qgσo$LB")kDc ơtV% ^ȚfdƖ $CA &Q ÍGy Ki}(;,ꬒ-f`'Z#iaF΂ZmgR4m[o ^p"XHLI؍@ax/n>y+]:1AZ@2X6;w@,|^۴Ws8$g= fpD 662lB F>@(xbZ hu\uҜk-)[7[9/-kFho1f PN Fzy6B}UeFnaQC^'$tyXǏyfDke,wЩz`m!%L +t9Mg0zbql!>oD>lɹOKAh;U7y7TyǴaxD 5x %f$70rT],kP~x"mLu1|ǀ6oXвR;kծU5}V(rd- &sp ބRmTc]Z.l}~=4t);>@-r-@wU dAe w]-4,0z侐9?Д n F.a| 'a)Pj⎙$u7Q@.`95[z̓vi Zm0hCM~ C|Y.X,vSVZ ]\ v> qIxTPsL|^|8[/.oΆUɇ}.i#p&{G,tGדrZZ2 zQ;\#hs뷗޹^^2 /\_|{5.K&Mi|8wv2UwHP~;n. ?}~n1.Rdw{398Qx#_O]{yW{ye޿.J?]_:y:~i=&5Uw\+]]_|w'_'u8!{x)vwzK9;5zw8*{'MoOޤ?Y>q^:3qK+]-p6گo+Ws \IhJW4p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJW4p+ \}æ-|$j+Mq3`@GJo4pW24p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJWW \ Ͻ܉P pCL h9}*ʶc*Ex4 \iJW4p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJW4p6m+pʲ Uypkɑ4p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJW4p+ \iJWժ&@L0rWb3 W7] pN>;| G1k>9; %)m[ ] Zc+AN|!`kfJnG] ZNWN#]3iCt%7d7?YPޫYCW씔/dr5{㽭ö^n '}??./o~c409w6P>11:ӳÅCyޛ =oK>aw9(~hvO_a^9+'i}l~i@]{ԗwM7oR]giWElu?p{3ڷ\ #K#{9KfvpmǮ1n8?KX/mp (xVJR>v1(]=Cb*9nnˣ~GPZVzt|%Hv3t%pf tǀHW٧͆Jo n4y=2r9!lg'etҧtGt售t-gJ?z\ԧPRzt(7DWՏHW7nF] 쎝٠t ]`+7m NW29gHWO ܒn7Gѕ߻Is`Kx(L> Ɨ̉ (ۙOE Ag^&ܙuqg6cwgB|ξ;#8 "Ļ] \ЕJP&VztчMGoVJR:v1*]=C|A(fJƴ ([uϑrLmdAvg6o&Z{6ع%t ɏ? nx⽫%H]=#ЕWҩ9 '+ʹ|te!]9 ѕNLl3?[֭%fLRZ˷~ҊIAp5e[g@g}p8zz#\9*u,pErWJ{+ac*"s9 *֊γb=\Cqcp{I3gܜI<+"uύL+zpe,+sDpU î\g-]be?ر ce7 Fj?s%-Y {Ц67>\=| XX:WJ=\CBfyQ7?Oj/scb-ʮUR!\q@uDpU hઘ]kMXiUWH]g쪘UVUR!\Ihew݄LL9Cyb$e(|{6gGܷޜo?k*O%L#G0`Ru`c2G}cTQ:W+onԷrO ۧe>~}} FhԠMe,ע }b K7*>域>}"j6wT+W砒Mb| m sh?sQmMkRK{PʏӉOMWчխlsm=Ήg~.o*2q[ic񑿮~xaL5[yc-BVQsOHB-K_c[Q <2m57@3*`biZ4@=&1XVj 6rd]oW BŔr47KMLקp\PP >p(}&# H!*Wt#yj[du9_Q3T_n=]*Ot~]VX$G'6}YZ19ɄuJ(.sex#Z䮨}Zʟ%C1'(Jbަq!rLC0^T3Dz9*ˀ9ڣ @꩓֖QoMYYL6)iO9+Y5sv6¬ucJ_#v eM0s#+*E223 Q!P3_8oEH%Ʋ)% ý̰b.&5 M >NRT Qyd6 X<PYl#Y=oG] \{!#Pz%Qs΢IiŊT \aSҖ*1KgF;npV^'1c08.,櫘M#_.Gi$v@| 4iD <(i-yWoɷ/D#e*3h7"B]Mng] j.rhCx)U$|p*?"12R%ew!D CbcMƚ0JtTCCWAfc nt+"M[YY3 ۜhqwm}Yvۚ;M秿Ep*0 B1p5x6XxkAkJQ0|89YKVPjYUq6HyUt1_( jO+_JEl4M w5BO'[*ln|pFCzIA+yHD[>2HO B.d)Yj\\h=ȇx _U% 0QQLF(\5 'Ú+Q2z4=}PVdƣj ; ^Lk#X6BvMk#:68,dň AQ@y@dD0yV*T{JJf4+lJ >!`f1Z.{gҫ2PV3i[QpxP},iN.pkBX=Ɩ. J3Ʀ Λxˡ~6Nyp;>F!s`h-Jb2ITL@c TJxsb%Ww& -2<(lCp[B+&BDrc(/ךP9PCZz^5SiZ7TrZOLN7YIϧ76()z%'}kH7Wy5v2[Õ=pYHQB kPE\P0@$I%eų"D.QF,'R zR0re=FȈ,N%)p@62V3idU!CbᲾ _ g_fe~}q,0O5nfa>m&_G;Gl0䤶A/J^VY!ı2ƤeAzǪ"67< z %#ĸȱ&W:(ᕈmԙr;flFa*YЦ2bW3iqH1ռP{xɠA""WȘq tNQ?D0`Up'Әu5*Y*7e xҶ 8ĜYF(N՞rfx&4c*D""#by0zvωJf)Yh"J5.S, CKBU`LiAqJZ`.8*)<(F B=2"V3iDz=YW:yɡ(*"q<,zYeM x1(G&d0Nj\_ռP<0<|[qel}Nw:? 
azFpq!h%;z5JV+|8#NtڶMp||#GA7,6BXxCoYP2}zGޑwH;w,G l+Get ukG֔\JdLB*1M:;LwC"^2&Y#Y+3 D6ez]58&9rlso]׺I\e,xfk?^!1=_}#j|u(te 4:mh<,V;ȍueɸ+!IsYC,vT7B|ѳ޳>U`Z%$̒I#ƵJ@n1Ei亅0̕&|4Ġ+iWFL䑘b6*ZIbLفq6.k,\βZ٦|iO燭[mT&Ư_r5*}v-zs۰#bN>x[!aQr]'G=9z[r$wuI37r"bd,u1r0`ɣŲDvE*(t,B,lԚBrTҡˌH2" p+j2m=ޮB&/Y(6mW*فoѴ~-3Z׆hnOζO (1R_qAW"m@uD7hS)ГѲ/װtV~}/{N}[O)8F/ͻ^|cz<)xb6stCL1hC)B4EtIh^.̩6ml&Bf2crGmBG9T`ƹlijW񔺊r:픺4r S3zSb<+vEWDz>M ,3c#jeudYN13b({I&"wFĐe#VɄ{DxD(ܽ;, CxV laXrha ^Zǭ^aQEa10g44|,yZWjl6/xf)qku(G UKqE!!wr|3/b=[G.ɣ)0iD Ai).(\3Ftن8{+/7T?{ƍmUF£b*qq'n?XxJ\6E>佌W/I>U.{ !͠Omv3.e R9LXnGͭ_MdHo5r_8oT ,lSx m~:zvRͳˆ,q^5\?oPuoByb;;cLɇ+~_`_?N~.Pn&8ֻXo`òD>Yޢ|wǫ~tQ扛Lk\!R;Φu04GL~$z)H ɩq'|-Wl<NFvl2(6)>.q'AQ=U1NJHQjwΏ}zQ EL3[B S";3 jꂻF,:VyF`lY ֧{&VfZ=a6-fTl8Yu -Têv(+|& Eg2/Szy㹊m9 3)G2 j*ja\9bY$">c'Cyy`Ayt,W?y6lƄ{Rz G4 K\ 8Skn8Cm赳e(bz0Ie^fj#Yخ'-ƒ_^7|#= D7T?A©eaG8/qVh,N[,ּK\cIIO>Cͨ ;zIPI$CdPD{fOsn8OF*"c.@.Gc7 L$4[FksN"< Z#~Hws:$/Z;o4܃7!3'Jp&A'>"̡ R?jRZ8Q1+N™XDD` ܙ!LD$4 jE.^;ns3>:=dX<ʇkŵCLj|-v=Tq: UdT37k#2s| ǒ/HϨKZCw}{!DI1D3J˨lip]JH2(PNY &vV ~BVgp3lZ9c<9w8R՛Tr_|"i:<c<`d\|Zt* =qR)~gs 2܉'R'QhŬ RE *iT&]Kg>sԍxf_ٺkwCa/MmnOnO wyK+5PoE;(qfp(Apʂ,9dV%* $Xjs[V[\6)t*Ř[mY[#~ g=9M&F*bq]ST,FTDQEsѩVnf ʁFBɵ7XѤpJizroD-'PiY`7ZckIp `-P\) ҂WʤdA[,PD YK ia4Q iHL(H-RHp@"0@] 2l-Ujz@ȱYWvW"ldJMBeV6ŋ+;sѲ'фkzDꢠ)T@e mĨrVD/І6pMN'Z!蠵^Ce9+ .m됄E}{b!p\"KfUYd IG.) 7o$CXP/33"D ⬏ ڨټGFUOt} Ttrû # ;FHM{>Qn Ս 0D9F/ وg?d3RΜ),u  Kigq/?DWapMQKSD(9UBX5e(BB+:<֞9nvQ Y\k^YF *ZG5i)ɦ:0,fLQe=M{db|ADZ/Y,o! L),$m@x- "KyKAX[f$3Ewa݆ Jb \`D'Hb)r˹:ƒpA0܈RNdOЌqW 0-dp0&%&hv.ZRTʑ AEMtԮ b1C^.A@1%iM4E"A1&B0=܅oyЙ"cƙ#[˩ޏvTa } ok (>(Gǹ~V]^cɹc>P{?sQP0 qVGT*M2Q;%9ٲntyTq MhyG.'Y+#{] r]kƊ*cbknBBrM,kˋ1=I?o7*F%׍ uqȎhë_vϟx=wo_;p2H$ }/ѵiko5װEu3[kHC^mr2;  sVߎò>_zU'jDxW^ǡ_iVr>MUI yrGKB[JvSب]#_c6_{%Of!N "gbY"*xq~Q+~]&~+W8G(5ɪ*=RjFg_ˋ~%T!Ar_G\]JewIhSvUOrZYñ*0"jmhVq{oGշ_֩MoUE(Y;ٸ>v{槞oZujvLP{|Cn]6$u:f;jOzDM*\4 y5` Q ) ZRhPY.Xt~7<;\sv-WRD-(,)yNr7!Q9LzL@' 2\%)Ĩ5t:A 10vȹ_G]X}ކ]d9-YJ*3Kf # Ǡ$bwgE'!2֞EK$I4)ɸ$&gQ9ⵏ4d8Pl޴kAiS&tǜf*  Aɰl*hT,Wuǧ`k2094齈ke #xRg28 X;; ;\nuVqZZm N/4loY+Оq<^溪nB|6V.#+T+NBakͬ]瑴ח| *#ur)\@DB-ME ٢/y?^czULGv &()|IϞDTW.eSeHi;l}'YPdYvu@%!yLq@%NKN:& l34"%?{G0 /v2AK)^,bX"=.n_zܥn-KÏռT]:}9Xwz郦Ҽ{js!Wu:qr㧷o˧;[r{9Ⱦqڥ:p6cE$RkT{!JZ9􃳧a>hC"-贍AU!wxF|4o7v:y?1[)uxvu7Fo?ɅW:nzdzl;FYnNkܲ]O4vUk >wus>~8HhN󚵯\gy-}߫aXF4RǚsޔV 2wc [/h룎sg?Ҏ͍b7鸣cqMb! }rM=U۽Q$ǔWa|-˱#ny}5m.C a"@lh46Ҍ6J3^i3+-aH, ʀcjFW sЊ;ue]}7=^˝\Wөw GK4G)~ZEWz Ԑ 8ftԊd2JNf+$oHWH 4+ҊTϮFQEW3CiIWd .Vte}2ʭ^]GWB ʀzvR+2Z(iyv5G] G]m;؞Mtm(o>_5qc3i õ6'BVO7VZ959srgmMp ^nt?~}bSP u~[oSs:}|r3esWY+毫˷7ݿ\|Q>Kbowۜh7'Qlg{^w3BvmN[ןںK$[eꅰG\~=[m-#8`D"1k0BFu&Y)rwHhj륃C$>;?mr|_?\.r9v&,ȹf&݋"eRr*kMioցK'R'ݗ[~ }c Fpc3u(m8(>[ciHW `jƱteԌV&(,`jHW Aٌ ]+2ZSוQn ]GWQy G$1ZLdXt5G]%CC2ѕFVte8eFnw+g3!j q|8Z9ѣq~b]EWz|jHW  ѕrlEWF`2ʸjB0QCR`Ԍ ׇVteiRJva uER†tEA+܌ 74+7y]%EW3*Q m=Ֆ(%Ƣ'ѕ' jS.;wi 3 UP#ʭI:&VQ бڿxvPws%}Br= s xXsҺHIKXBɗ!/x& Yp=<(d:.69ꄭSKu>ROǚ-Cܚʚs밶CP;dX'Iљ{nazhetf Δ2壺9|t-M2oX8s+28gFh u"]pc3RZta2J\t5G]ْU6ϼ]).Ԋsuݢ*`PfteЌue<~tpépӉ}`QtF*,zhC=P]pJqVteSוQ/0y Ԑ;nFW Ҋ7#=\WFɲj?A:x,]nh&2ڄSוREW3ԕ8ҕ{hFW]m SוRjjrWmNs.;(7:ښ3žO6⴬k(;Fr NK.c-Fcs1 x; B+2Z(2$"K-=pV`vf)nYVݤ3@{nrwɨl/!3 '{ε&2~Gw(nUik)R?eLߒ4:3`igtf^ZmoLGusy"na 3pɷ+n3 e u(SC2Jq5]-uejbS2ѕft &?4%kHWɮ 753oSiueUڳәcϮFɷHN|@8ڔNQuF$tF*-zhՕ܌ ]-(/P4J]4]m(]QW$&p8F< #hM]WFIjb!t].R+2rj/z] 5X-qx|L\ :ZٵoD&F?%dS.~'2"x%k遳S;9r36s ژu1cxO;;P])0KhFW]mJ)]QW@K3pۙ}dqeEW3U|KٕfteLve=¢*ҕkgBjEWFK2u]%/ntnGw鏿L=N;I˧Fe$Lkn7uusTcCR`9uv5]-(>m]t5]a!] 
y 'pl|kdTki~u( 5 o7 8oQ%رY±\&Y%KwZS(Ð5o^v<d-|IO֩H561vDIRvWN?hܐ$mLEvI4<ĝ<; 1E%ŧŴu`0`qTq/?OZ@2^0U/,_Xne򌛤̮5ԙM%Eχ}6iOI:^\.a2'G VpNנرho= p #N)ݹ>$C5NpdɳH19Sb K 0aG'b 2(=Ӗ)TLJأJ>: 3= 쫪 'EIXOR<JR5d%8KhIZ]uw:/)_Z m@)(F D ,h<5$v-2-7Dy0xHB]|T߂ՒgE&zdB$}8 v)X &h[IRS5T)ޫ>r!m'AUiQv#ZOJO\iſޣ iH7J HY,87n,mė0n M*rN~-!J~xz EYBȌ )fNic.D^*LEɅQ?51Nі5 csýnLVhlV=>TF,Elf-[ϭ^:Kɨt.M7<͈<͈<͈<"ڎ 5:h-9x嬋.*u=!<:#*y[9|VF2Cِ6AGo&e#A5kTzZ,{K-C?MG;CY0S-P߬cjng掵f_7@Y1>9N >4 W$.(|#w|BE'I=vxӈR5ۀJ7k$c;*ju !BVie:W\77yl5{wr椰6XP{cKR` wFmTʐ^oʥd5b_yCY;4azC]Koz  NoE o*ƈ޻AKCoކqkt^+Z1TzEaa0 b6v(;@ jRW Jh&]hf#]Ȇq|"YIjӚa%G}>wo\|~0h'$67,{c@}?],7ᴪ0H{{v{2o'Q!W^#߫R׆8Bk?k@!o LXʂ2:_LAZ+ T9 9^>FTk++*zCF8DMo#*+PD<Ks _e}ۈlzа^gK7,KX?iъR܍M}]P9#ٗ6 |-p6>r,ƇUO4F)ݻt>z޼g< cI\6a7tՠ?L&%GrW8e4&/l&g?Q.p?OG:yvrj|9R+Խ{z<f۱fL2` py妭݄ђOUw Z0{3l&>BN4VI¡,4c(U/&=JeߖU8)_z})rdzbfI/NL縘ͮ׋S17?Eo?_QJ7Zgt\"ISe2ѫe1`΍TwJ}cmwe!l*G] x)7uE5RdTgR+r2_ ʇdn겗Mv]n2ؑO@ɏdl5Se˒  @PFK?u_!W*uצn=UjL~} |z䗗?(hwqbG^Qz%vU9{UjD BźpzBV]Gs,`tTXvSq':+8oQvg1Ҷh:n1Z0= u JqBL;Wr;ϟ?<x ]Yh91!]4"/+aDJ R˂W,H@I=@R;LA?X|f|.FGF]O\um}T(Mo\\O{V\Db5{JcY6h/~Tתԍo>Xx'[T׈\dݐ`քl)ԣזh*2v}l>VXaLU4ϋFx.b(0SZM,[Ec;#钣5l9zQ\{.-0&@Y!cv%*ky~z\)>.c۞qmό?3c=t+3  ?;OJ#F6B Zܤűo- MColp;}s&lmDɱz4 ؽLShKkRySy KCcW̰㢠ӵ}/ gHx]$6ho eBʣB+N+(]i gVRV QY= 3B2_W[θl83 G1$ͶFj 5$hHS2pN:.JV^T,YBÜNr jj%M!m\?kD58"Z2BtI*+,t6 P\J*zFB@A! b\Llh4Vji7'h$$[pAcɍx22L g>LEUFs%[<Ņ")Cf'#Q6 5gÐQbNR3ӈ .fC##ETMPye$dB5eû}d R: % $z4w0}jLG Ew(5o߻ VDKjD9qJh_^<=!v5=(NB5on̝}ŁgcoDNKi-K.I]e9[DT[bV앃q[vK$`z`LIEI X8I@b)'-LACˈg_$Јŋ\^jF`!5{3SZ w˕Cо$SLp]K3s,pQRUسW.XoZgzJ@\_Rƶ\/=:4d!]{jnl$xECLrDost8?ʜ&RTXqi*k˒iDYVQ:˄Spw9h6$yJAV0#1>e ü/}k2,gqPj^qܫ-&a_`Azu=kN}McDaT҃}0`v;xPr{e95ȩI4Zqe ,\ Ice?_KMč9碹~[SniNmL$z_47J[PwFu!V㋻XG-@47s2$qQSJ>㴍b"L[mGĺC kqJ@0[<<}':otO&Β2uDrAi aR X}nNOkk60  @D(llCnV]0̬YtYΥP*=3ȓjmW9bxDឲG~ o+l ٙG0f |N?k>9/ND-|CRQT!"=wzYE8ҟm%ߞ Y4|Hmu3Ā" d,Bhe cDCE3p; ՀcV1^$ctopc&Q2"H&DV>̕1\7@nnĸ''sFyH6_!N cO7p ikQMu;3}ix,MI"V.sD !ʝIT`,'xTmO#8x kt'iyHHb?-i7Ӌ c0bo`TI">`@2! ] u@pGo5a&S:;8_Ipk0@dbқvc@h,v&ԁBNMXw|}iH+Vc$Ө(Yg|=.ف䳌O0pQݘ1klyfOuBci!S =yt1dȈg>.۸X@xDHc}%5Y*}YҖ?-PV("C'hfEChdcW*f$K <F0Tp5_5Ǔ%̘ӐL-$9 y ]^ UCKCCc0Fx;ӧhf;8a-QA0LQ>A87wsi݌z Zs|\3|C.0jf]#@Bg|_(vqZ8-,{ ;B8|( ͅ(h,Qխ?pʰOXUxh-@^mNMOSD׹ {@jY=\֯69]14vh2%c2cn\,uWyS{z4&Iu'RR3ړ9T.s0 񮮐N{!1}NwOL@6\o{}oIb`wyr큦́O~Ah%xe#hRi\o~[׊y|f_ͤIP#~ErBy?- h.& 1է!j*6w_N.U"<6LPz= 0vtn>lqe1o]+řwBeӓ'4 5_Q`#5Flk϶P81p5C356&ufב rƔX;X*ӵAhX"= 4n܉8 /R[pO %1Kd違ݗVhI*O|>V=mҢCWyl(3"3үB~ϜAME>1: g;͘[q\8kq8nVqܴ/㬓 F%EAI(Tm,7pcC%+Aa6 ިGog~v}52Œ8*fFy6V9ύ1mzTv6AJzC0j^CIFXBfIgȔ 03_D(Yם6/^'iRT$Z[:}Q/:}錄քtϟ~x_oq<.CYq_G렲]k?h?z`]fAoA 'yq㎫W;nz6pQ$j) `6I(sPYc֚cP@P *SяʮvH>.Ҳpf!k0¸!oL}N%iaqBABLs f3S!y B"O@QY NQqdY[JƢR{Yf^\IF&)G}VF3[)Fls]% M(Ae_QKRSe,Pb"uUCh2{&9z>Slc8 [qUx$HD h.#HhoeDTLSc'6Ȱv Ln-lza=/5C"d_@u5MJ^,ڙg%@O>Ϥ}&V79#TV; Ny5CuEN}G3xjWic(]t)ٱsg,9GzVqu$q,gQL -% &3ͦXپlNs%rnaEiJ1Jէ8Lh+1dSx8-ĘɆd#-Cњ駡~Khc9\>pN1N=8D{uWcP` *ɜD HdsVC|X8s ,5ry#>~W#}N9/OŏdzK?L 1\sׄ<=J׋Z i\o#5IuI1>=Fy}؁T-^nzʱ?8:PBٱ'0kms˻oFezy5!O''U߱`eZۣӣ:&Cmщmz6zPsJLa}9(`!<3nkUs,W7&,O~l-4&GXf5Ylu,c!% B%Tn (ۉIM,Z]X6e Cڮ,rJxBNQYx'jZE:=1Jk.']%70jee'@SB:2W>6|Bۺ>Jj1qNlW)bA:Vƪ6]mZZYoY@Ͷ ochq/@柔Zbbg:1Pv¬2Q dugbM$!=TjǁZ+8>G 9l`ZoV 0)(V8Շw&-+U -#祀%ssSû[djԲ^u, ʡѺ.Bd51l:SUm:eKF˵r<~;Zܵ4`rZpR-r "ِ1) TI ¸ɐp:)dZp y:=ŴhWI7){([-e$VeTr.xElE` p{ M ~͟J G˖#6k9g8 u0gkGC' '.p~® TZKz)SN8ItJt3+ڔ׎zP@\CAS Dz TQN4$uZAA85Pa^LN 85)x6k5<>]K :\֔qR:%Phm'#Va@8)RxM۸Kuud7C`Ӄ'gsޞXpxaާwњw<qu[G4ºcoZ-]MW]eC;zZKA>ƾ ?t>J W#_!N~xPd[N$h(ۡɢc2bݱLۛ,j n"8]‰|lyS)0t5 mh0 +ΙX,h'EVjA T(J iգ5}H@ "v"S vœhCv1\*@%M2)Xk! 
%" F׹KQjM3I (ڻ0m`MШ޶6&-X,7jѭ $$\Ҏ j6v@ ,/i$[b('F+ΠȖ+i *e=fFJLZEkSh}S(O#)3y -]9$%H: v9 l2XTv-931J1\*ù%9i_uvô2\HCvA90] t]v2d6 $2=tivcjg@cg4BvcQZ֘UgӭW}~+!1%ecqxc:k9^O:m|9mKnj|+AE:0J8fbHrKZ{U6]z; rMXGcFW_RRJK`;xm;Da=rc!NutϜ~Zݛo͔HZ)3&a ?󳔃(E&9nET2+w@JGV}iN7Pk03e*Xs1J#:\zRV`4-a4I8Tڨ@h%#%t2FIbRBRNד^,s(> gѼO#h!Ɋ硍 T19GY3IF7DȢ6/gʚܶ_aenؗꇌI&vq^Ʒ(Vē-@lUsppc) ꫯ>4Vnrzu={JDz͸xf nQOxqpV 9+dw0O*b̈́ .x3!'=4Wga?!rPeטFh9AP] a]t:%gLOTJOě:ȥ|?=FrJq{tzndΰ3}#0'+Vͭ2v&N?];Oq[TJ娲%-[+C_s/ܑvQN?zY~@u2V {A JaFM|@T,P'\ uԼ)bП|M#ح%|qC+ xjn[Io:?npO8{O$οVK}ٛ~Iαfwc{аO(d}?d$ 8G89+ YSJM__d9sB AkoYrD/ |ʌׅa9ݹ0[j~ZS%R]J(N5}Es Hް'!j;% exͯӋFF_|RZ8պ#n ?`eu"J 3cygo3G[1Z,zyЖ~N4@?&al0m>:c ׏S@\ߑR'tg2HD˸5ZUA[Bkdo}Le{+uz SFxԾlC*=e=*}U|""SĸlN}ݘQA抁踾#vkR<Ԧ Xj2$+Q'bvC9b`%:ȥ?NKlfh糉 a}vN; - S+S$ZuG!1 ղ-s`HܵU3b&j ö] G\ XՍZ-bFcKgmI-;H$G Q|O3B {ebe,'{Β˻`/}xwYjiCz}6r8}NN M%Ҹ,%xXbjB,y 'XbH5ŒO]-ޞ(Xbt%ŧp%FW,a U*Ki*ݻ;1Ÿm`[ؖR [ԌxkIu\g31&v{6yl^5:Ϧi+R*(k=iE4˵k2!Sk>s+\!KDH!~4d}(p8xxƋṴy,˖b՝t"⚙Wбӊ5ee~!;}37jlC)/֘+T\[B~@@ޖ_ Q :GȈDkρ2 f%v 'sV.i<62uGxp8"%~8!焣P`B}DEdH(PvO>erl? \B5BxzN}1 X!0 pPBC(>BDVPpe( 5Bp ) d\0ሎ@ON`0%8BѲ$"8%PSy-hKeR1 BiH9},PH%KHp`D$B,p$ &CB\P,x[ql\O5|IW?рh@7PLA׏M1`.y6SCZ l vK0U_>1AՁn- T˝#v؁[$prP˴B'\33:O#, ZtD`mU<ʟp\M1[Z,XiZA͑Jkx*>pސϕT[Af@MO=A<[e;A&2(TCRG LI+r]*:+mׇ|QlDŽnz;R٧hDRRÙcQ̟f!U gz0~bu]:+v', q!a Dm/Vxt$r9tY䒸!5,M$IU˳?a } X(p>hDC0 O ͐'<b.\cٮzF `̔K/6Fj`T^pGdePK7f?-'ދmŚw\[-z|6́ aC?E 4[}lvk}<iW9>ȗ~zP {myH5x%8j.bu.j:LC^k""jzR(~?fO"-v&fRѺ4[dua7,%/lKuw WK" eʛnN>]{@c+.NcOaeDbPe1*jJJysF Jo]kˢ9YosB#K9)Yi%џmF'YR| 'Xb7͒]'K`J9"lrSr3;$nf tq㽴Q:nB:{ [WsS8xvMI2gC}ڻ9rzBRY)Zˣ?M^ >t:ͰI(69b̓Ib_%w?܋$^Y}Yk\6vȟniR~.A%1KɜxuEz"o9-+\qE`̓Ey ~6袪R=β*.Ɨ !90W.^qnKPm[`U܋4 2ݚ@4*+C K.ѝw |5v2(JF;Epwz, yUNntJQ2]%p /2af=7Q >3I&vwHTJPѥQbK7 D7-e@=!lĩQR?ͭhkӯv޷f_ERoڏ}|Z>~t(/!Tۍ\ P@"}N"(TNr،QĀ0j8#&C<=(=4LSDhc}INEͳ<)nΞ=Ϧ?|e~ʛ*At'SͩGu3;+u*QSoon@KqI ċDҔt{:klԭ}5ӟYdJwǀi.dzvozm8ZcN49B $}B`8 p($#QɑľIvU^`K^õI qe01 y^/(8\ZQ4bH PM):Z9YNtd7$;Փ맽pTP$f;4.gߣכK `z_2V:(w=7)އ_^x ) oU/qYi TQ G&(qٻF$WآΌvb1< <[r2vOER*^b:\<###~[S?nшMq\fIJ%N ZGEy>Ĉ1Fr4za.~d'\'W+ ?'_/r[DF &;o#޿\&i' ?'8hOkA}Й9"`Zh{~ug?xR.F t!:AU?V"$7.>|wQ Q9 uT) ^ N'`*1c#Yt c:ʞ{0ԶZ?8[3KXWFIArNko#I9O< ꘓ*GݺWVS@k6y>LJCKS4 !sAH9+9AÏP|jݖ ZYsvWWcs''!&>t[ eٚ(_}HӅqq+vS凤S^ :*h+MQ˓p6NJz4!.ܵj}aw1~HUR=r$*a` @E#GE=+P0ԗ3^/8ۣQSQ8וˉfo.US)4S%XˑO #]"z  ry2@2L`AȒO+?®\!(3wA!xHoatKJd.٤wkTpL#x D79#w l۫5GD {O|tp@:pCI2H$(\[l3 F;cl<)XÑMEjC=83 )&\)J2Ι 6_Lu@Vf?u|X!lʌn}Y۫<[W0vh̷V}٪}rVȏx;YW$Z Alů/L Eb~7bDCOۗEQ闢.ytzճ_r+i:?>n{3?5tv:ă09μCyo>+CF}i! 
L<<\/)q輻NrƂoS_BjoO>^^PZo׫q}X}ʣ]ʮg$w^fKrݛ7opw_/Ϛ6 ٷ&\-?_U t؇&y|ӹ<}Y'A<21rl yuҬ T-wp OuL)/Pj(d"ZIjP~p1EG62&( Js^Mp3eLpʕ.WRh4UN6TyjJ}tŃ6_6ϯa☯]~I!BL\+qdٹP+pϖhɹ9C6tgã8-qsTDЗkIϭ QfFٳFFv&B)5hqfnH[SǶC*%Ⱦ+UH%t/DPu%F!1WH¹^O I%T'ťрeS!p}VF?bd[ᢚ P镦d$EO8 &iT14 fJᢽL#8⭌ޏ 8́:I abs@JloFKv1ZJϖ> ^kע^v nn躖m7O" Rג<냿1c4 hSwJsyv&+yۺE;>?rwdR{')zSD0CZsyU<_8W%UU~vy9511Ƹh3+I,%Ɠ3^Lw\L.C\7 h=Tz "Cی`Z ,c׎ @3yX_ƠUPV|| jgh=U:wѫ-X߇#߷*7ߏ:{DךdSBH6$z!@1f j0͏y@]i|pϾ^]}qSo29h& Gms6RIAyFaN*fSoQBydA?$KvAR5m0 V#ҿK3_" 8{ OAa N^2GjJpAy*sPqfP,ӈXo͡rZXB!9NDy1\(Isku*: - RV-v/x7\,hB"\399yfΛLS,V$ i]A0o#a 2ʥ^03% X,NI,N~0#CRFtA"(r dw |AU1!j׳} E^K:!8fHk >(0Qɠg 9(oph(gR^ƒДd2Y 6qQN!av$(+"$I%DLԂJ:"#QwAkhExӗ;Fv@Pӓ0/Ԑ?.G9cʘ$>#'C|?m˛'hhX/WWhN𷴪+{˛=z_CDPhdwX>mc\ 'ӀeyҘCDb&@Q* r'4ie DAyI=C*[n; cnI@'N:clh ލGq9NxKk.@J`О9+ʢ蜰qۉBy,s[_-M-QjRU9 VHM4EQ4?--AMN9 Ѭl02µ׍JKm)(S0(JwMH9e1"4/ܙ~KK\LDU mc҅AvJ4wۭ׷3fQp p3_YcyJS--O 8|=2 HӜ'\la f%[ЃR:BS0R Z$ȡ^ R%LQ(`KA'ѩD}~B@Q SBMTA8BꝠ5T4 @r30bJin)i6*S(6F݈bκ 4^w@]ɴxvې@s.Ҳ%*!@~7 H4pCiK10Fc0.9qRq4TdIr$T܍v,% 6X[@n\D۴ JRR,b*Q;e7e|Y,5) p0BHA[M;!On:V^6)N{HuJP9d qyZh.u-5l|9X*`ut//xF9FrG]DvEY_?-^JR0X?K|DU%-r<ʮYU|o岶2gv Mw/ Ο6hGu`rSC cP!y=d-?CeuNV'8`Kt` J;W<#R .{kHBQbW欂.3KtZg')4绐=/tIS2)N:lPFCeSn}RF8~ÆyżG )WFhǿtZ*6ԇܿ.x"t0p\-Kl\ߘ{h>8mmMS$L\|.X9^5Rmd7˜G `:hg?~~O<|gښ֍_Qe$KaJU\^ -u4"uAif[nїO3VZ[X;HUDݣVBHC:lqۢ` j.oGɅb,: &MGG8vE!ܺ 4X(ݏDb8MFTIAs'Ay~w z(cU.@M Y_\~fo,W(Dxӻ+(]/FWCt{|#ب Zp*LF "FinYɀ]&dTB HlO|O31Q\](Ҳ\ w_'SMD~C5QZ$B!Mj50E+s5TYoivk@[V/ֺjlo2mz;&OoYiA;JC^YX^Lڰ.ၲғSӷߛU8ӿ֊l L 1u>e2<§s cpSftTX%㻿G2OR]Ý hQX雉Nݚ 泛rybZ<]xVޞRғy9Ρ~5c MMR_!dp#CB2% L#d(dWCR!*s}˿;xJOkXBn|Ejv])G x X:qgعR4z4͕N=2-?|?Ym.ϷS @`)كsq+Nbm9ň!^3)]^Vmf7ߵ]iW8T8]n _ ]-2\Ϊ2o7* A8W߾np!%4dٽ`g29uP` PeCJ^vSiǐGb77~HC+)Xۭn'"hE/PD(Vg~=v$>Z~tNP-\z}z-VRa_φ/)Lnq%)/fbv-b1 eܞ *\$" prr2)1 + }y}v 7QDi:THۺ$)ƱITksfİBQJDjA@2lpQ9(Uu|fvŹ\JUzjnrN~-oN'#[fkS;#&±ȭ{9X;@CώR.*5:V@! -3 2l1aZ,@vAE-8) &P20ҏ(F",Yw[ dKQS@( viJlD務l2ߞlO\M\!; w|BT 1k]K $rOѽ(P{1yyBE2;$^3tu{Ty`x~z0wWqt}]q|sRŁF2B-;X|9^[_p!(x:5Km&Ip"n`NX0둷اž_3u[#IWKMIjoݏmr,<1'?|c/#$(gjd%Z7]*4Y]"&\.I(MOؾ|w "X: =te~2 AA i:]н+|ZQz~-O+ʿ9{ xAwU35%8JNtU^%Փ _UZH͗a4ǴNsq(-۠U"V>A:"ȊX{ vޒJ/!ts^q}ÊnPSBxERi11k @2jJR%IgI LH@!">BaV.|\ _=Nj˝_&XIӟ;#T/\0EUlD8ZO#c;r"{J)k9Wbm'2.fJ8 !Vl˗B &3,K>O3%"vcL<]ɴq;*d1:sϨeBoD·ُilOi<$]@!tʅ Dc"(I2gj`sD10rqʂ_o[/<ejE_ax!yT|*zVrp- 4o_q qh ZXbǎwia{=u%HEx;u +hfrۍ(q 8 BK&b 4??Lb^i*zϚtIJr- &0@R4gW3spcgjEs#T" #S* hf9DpWùԒ+LȨTs1]GJ|EKS!t r L55ܵS UN3;>`+])FeW"LJ L25t9vbw5oTZcagVvy_]ReӃ)9{H~=Q{zvF.L~?n)(8h,A! 5AiU oG{&ÀBoU ,rBflĻoP璢H=*=ahiim/ߣA1'-u| w26ZP6ebaH$Aurjk@N{g!^ӇRSm1Ba;6b!vdXW.,W߾حaVi݋,Rh4k7U:=޲K9ta+'lt93e xHb(`g+=l-†D>yk #|=$tsw<#vhCzjiက\rL[L *-8o=̾v3̮KJfRQNHTL$Sr83Rf$S@o/ߚsÑ`J#UX4H"H99d9H$0v,&P4U\*TXii "Ţ|+@GjH "`wWll9꤄u{{\B|k /<|qLDSYdzsg.N1Ac7 ԟZQx=|My~7U ;YT_l-c&Ow~2›5E hxwvWލND :E7#tM8BB*sk+0ځ* :@z:!A lg{]9dX:'j Z婶%ȯ?5c1s-jOz]%]г46xr?׃~y5U/bY.g_|-nt:Y6|=dTAMzk%-r! /-Kz\B\UzoLC)k֯kUWdAjí('BX$pjLN`ޑW~ԋEb̰ hXg`|Fp%ғ+:_TgX 2||P}5V*17_h~E3ne 5ʙZ@ 6V 6ܚ Oꄸk1K2l( gè8h_ vXo֑pqٶk#?WZvm~;P8|x|+/qنT8ueC ӼMz'(֘8UKRdi'wi-K}3SULz'"{ׅ$fLBBO'!PnL_q6J~. uO'H`4TnMԪxH*1]Ms=AnsWƘd.!W$7C T]h `$rZ_?D̎Nqí X%cQSmbAf$V/ւtT]ĸ#W }YAkIKcd%}CJ-~֖%X}w;oihf|QwU~VY<I bbo,ÔY6~Q1-*'F1b_pӿWY"AzdsttNdPm$P0ZƉn$ ؈p i˰h_c`.~՘^vOIR-1ī`T_߽:vx3pWOIv5K@1 Ku N*~N7uWhESD'vt?ER~0lOט2em ʩ>soK'q(h%(Nu{-t+{q{½f)\i2$$%D4B$ϠV$R@!G#w|p(2nϾ.Oсw<f].uo X pfXBAi.Sq+kHj!vt=+vEՠD;?$gL觚5Łw<<7e&*O*0rMzi׀[ `)LZWB?%x+sŜ k.`솷.t @C:^y[*ڋ*Ѐ|hgsE5#ܹI%]F9Kخmz}FX0dl{Lvd+.߳#L|HXo7ј# %wGX $::ՄTF! 
+0ebHeGAГl8?F +gA҄oz$re4&Ø@ 4Ph.<%= ?{Ƒ TWW_)v`sd}8Y}([=E $G԰uuJ/f^]Aѕh{ JB0輲98bP.gA%REY);)9!`Ly*5)dxYI޺LLa)E0Re hs&K;"ՋGY$*ɬUHR'`$I -:HIE ArO;ϛlb0O _-Z ˤsp|Pa,jՎM&ՐP;/bH"͔,9^- B` V\IǢpm/LԘ7dA BF c:!"Z36(JoǂD>CJ% Qqg2MR DZκaf5;PRu_()Jb6qS¼aosW>Um鑊\?{碚 u۟7*?24.O6|`^N'SW{}]Mg߹Z xLϗ n[?Ll '?Jwq۠V1E1;˻6`n@8na MCi7a ̈a-,;}rXQtn[PB.).ҡZ` Af%78D,Grg\tκ@` X!]OU w1‘1EC2(g, C(3 gR5 jxb -(5ԘG$>Uܶk&`r%Wմ@CEe0RG!xVNH}*kJv󳒼t!%ST]»uP[\z'lxh%O)4{P/]Cw-&(O#Nу[E$V!Z!GSD[ɉxNp.IJe֙xܰ3۝gJPD ɇB#0X|hOQyf(2@搙QFj$r*bnxF[zm0#Ӽ@̼ekɂ@iC61*BF4r.Qz ;\& 1'&;#l`0H$wbfT :jrd)s-HtנU9'FiD?Y"b0GV2k5PAr+"[BV3gr4 H-Kq60CHFO;u"1w<]/xjra4툙bGq͵~3#tQ ؒǣR# H*D-ыII6 9;Za@"'@ d! @l)`{ ×6n [+4@Pp#Yف}i$iV9/:R$|^ߞ!u yWcq5n~cܿ7j$?H: -c-?9qgUxpR?A(rro$6c1 C\ﶊQgjim\? 薕3fgj$B yO㉁TbZ!&爸e~?+I0噚]»uP]r5_>)cm^zn>,(޷QH>G|!QUiyiY`:tϾ!t~Ay/(z_ @窟$B,uCF奰6bJ+B2N8GYO%JJh\]_tIݐI{zU>uM0_3Ck4mSjT\đQ(# p,:>;/F ƒ'7ysxr w74j6^kfT-5zy7Ap|f *'Gnqzrw#d+1#9h\bǤ7dHw$l=u҆`rkw׋n"`8&&#[hbKd}ޑ]"FRl7&d9.=֞gs{~P CLRfv}3;?.5L-s:?nԛե1Q/4$i9f*Ur;Ȝ,ew /x_-o^rqlR[!"ҭ"kXM?B"N =cIen֑1MoFDeNAi%wqԠ] 45ǁcNF$OKyc %fdpkA} I|=hI b키bbO[LWBzYgh\=^@`r۾-a_ {>o ]i5ϣ%t7}ˆgm8g?Vף/ 2uS*UN"[<i29?\yH|2{w5;rQJEyYY_;cP,o沵8]߱,~VWiZ}DK6XjAfvH*{ɘkq5VfT$+*q]ι'߅AK%)nԞ8O΢I9YX}J~ë%p.}@ <ш&2}h8M%hx4嘦ٱO%by>HS*+nZ,w۠VgZK,|6SL{M\I9LYh2Pf2Q-"i鍅\CUoPE}3J:KWߋZ<؋Ik7x~rMNGCxi!go,#J@a9Y2ȿV}  \HZ"dCi >VWC.{yԥ&AJ^rS`ɜL,PzPF|]iiTp%`w'Y+Y"ӷ|r̥RvWKL{߼'rt:0LYK+K*(,R-b}-ש9{ 1ew3W~e ;D%K4^,4x/Oۧ-(jGe/qTۮEKp$tp˷ZtEݻ}/Woݗ0 ɣL!pՙ;hjKsgMm3>~հp\21]ut^] xَnxAFfpӫIX:Nq⋃;r?WŭկMzI좋..), q_iG8_:yǮ\BWX@p .?g}-!#Dзh2Ëxkjutpehw~o$}.wg]ax4-$ '"h =#rޛ[ɞZO~yvDG|ږ`kSzZq/hysm֣hsj{mZO؅goCY!Gvj6Iɒy]Do2EҬz^y`jevJR*/Icõ@U1bNuZ*Swe/ _Y +Qi,0^3!$MRab TJƝ.S/ߙ%xrjX4l8«K 噀Bs6LnIeJHтLdԣjx/JW5}첎sѶZ04 -~Z}R7-XY$-K>.4q?vZ|K-FFE˔s֬L*e۷gR->qOA&ɳƜ1'T](hkh-ʼ8T/{WG`͠ybv< Ò,U%'RRI)FA' Fdq? Pl ~Gb 2#KI&gOrE* 1 tBEQ'bך=w{ t(1ƝV"Xkfg;v^Mi %Gg& T1\6!4/r=2ŎGKkKĎj:KYIm)Cԑ991׊ $XQ YB9M<x# EO8; EW!H`)l>o uFC'P\1 ]8taqN?}utfs~1xKV7G)+V5\6 #i>S9xEGUF1Em.6[qU ε.JǠ3RӦӼ DtP,QnqxN*xN~W;Mccm?@0Gt0%dGa3R&^gS)xG陻Αp-+ަ 2-~ 'hkI˩ m-ђF+nG6^#}9B6.\kqٙk&Jʢ")ƆH2B5RV)r ::-脕e`ڒP0E(#L|!S+ݡV4=Q)aCGXf:;kd"y۝ַoc;rߺwq-cqUKۏZow/UnanVǭvWgǛ6=*Ov#poDŽQaRYΟ-"ҳߵp<ٝ&|G'1n'OyCeH4PkΒHP< V5P4kii>眭7I#uA,q 9Zp%ye-y::hIi9^ėdD>E . ëԠ­^Wxg]<`}WfiK#*3x99 4 ghƣ`[9`AÁzZgYp,skHG(2 a/8m<mmՠ o':Kp֓YJe_ЉKATDz W'<2YqԣϵejJ,0>QK?-1c̈Z\a&v6D%pK`{Lm6p80,:P4m)@&c5С-PnuX I)݌PJm;sQ'%2)J 2:ȔkîD_c#[/`բ)SDz˺vǨ:iz 5i7DEýTrK5$3xo=FB>˃اuzOS+܄,;I,=:ZsiƘz}!궀R(?0Kݩbe.(ցAGCJ/77o&!@cۺa N&юƖo'@"XҔ))ИAgT*BV e?bn%4 $9d>W!l2QJZ&LXhӑ)n+XMk?9 r#o" pӛc!1OBp<&~! )h|X9_'`E$Ɵn۸Q`D!J5B_Jgv=yz.'@% vv==ݮx>H6Z֢T-vF_kV7= tBcS'bךdD7_*H!3?>=( .Lpɒ3ÍRijl??Rƀ/mw(8:;$~Ԏs*B)/j. 9Hځ/GR|OѼ*܀jGu{?!Gq'6_[Zj*o~^.{>=K⾮;x̚>IiOO 6Gn?dR5ZN-ck]4y u14ւI1XLE-q>? 
ăt|tG<]/fu} l%W ޮv#>Yxstp#JYžڲUK832iscOSB "㌋%$s" /6nI)@;v{ڹy=ZL.~YԌLĠ.{Osϛqw,\|8(LWkv%,mk.gC L7NOzIr'>&OZ Hou1OAAmD I ]r kcF6X(o0hỿkYaIfߛM?i<c>*(Dec 12'GT=|uwZ,#m2ژIF;ɤr 65xr,~MTфd P!6 DU{ƽW҆_I~%mtïC77`u8i,R >8GڱHĪvoRSgmx!&JU*J.R¿ػ6$0A3̪R/1c6NlzHZMK-ڎuUfVeUeGL(ĠA:hKq,Z-^c4` KMAI,)n`Uae脹(Au1a Z'$JbvK5\||EgH,Pƣ ӈK+*%gbTh\k ƀ.8Kb >@.QxDYd@"G[Kŗbs;(6UTcSEM㱩*ͩ%_1 a;{{z͂16hnx-S2j Z$$e\$LE:A1kL7 VlwAX*p|-]0Sҵ]5\3%#U#`븻@VtrJ\ƜǜLVoowŒc=opu!v>\S43%CKӥ&PO N9qГDe֓G]+yHD[ Jϰ4ދj.M 2ͪX9)ͪ' 2VgLߜe]!S` lVc02׫_eR ӻF[Pj֫N*H.WT 6'ж۶yD0io-F!^> Jb VZC$RW_ZE|pT4`ϧ=C:H-8λp Q@R 0 d)?iF-e}UI"~w2cs6s sv.}OiNOI@GM@r3N`Gt,0,IކW#MN>j/!`oV>_%\\ae*\;gklp = n%^PcH{ qw@Yp,d+@0B;E/eu_>75T sۛ3>qKaJzzlOCRu6oZei*'l8_-)aۚoS-I #-j 6BfGE ęt w Nݠ{QtKbQvϏ 4{ )l}ZhW\[DalOUohJ~kkkA_NYoۿ*Eropыׯm()2~un`oݽݗ/^ast׻;7o{˽o{??G$o/|ы{G߾A:AA/'4/;n38gKٸo||)nF9_NIV$OY7~|eH_y0ЂKͯyvNm\~j/Vy=?n=b{|+=;J>\jl׽ghѣW/oRKP !!oEn8&_t=Z/ҫ|uwO#Ή̺N˝cۭ]=Www#zð/^||}VFS;_GzC_|yҝn{o^-swqV^f^C[``c} Kdwv}:L)<yi"},,%s~)%( !.Ў(DJDaBHCt*g,Y&!ѣHW2;_{EW$/yH1WRr+f] eՏDaO)e4SJ_myfQfHqSm!Nʯ %t ƀvDIv&" _XLADW^j%rκl2b9#F3b9#F1Zz9$jV lusm3*5Q6dMj-1JՔ{rچA(#9(ɀMH9G(Zyčԓ. ε6떀+-Ъޡ:߷V[wJLJ˲,ęЭt0*^ Z8>V;I}OZ`ʩ:r1J-Z*瑺<x vNHԐg`˄IE-?F%RҢ3^8-4EDY$IasH$ 3&j  s-,+=qH2sF҆g/&=T.l+Xء}pդxuڥUl8I'2mD>sݩzDSgy574luq,dlVO+ϟf-&$1JЫ|on7ukIC]Kp0J,k5˯ |5؁P_+BR7D8O-5M/@D [y"]6h&(MyWfhV!& $*W+P-M("O7P9clb)^+cY⁽*gpBS1&i+Md T&M+B7bJi)䀧 h\:{`on @ٵR] @(F\B*mZ*%P+RM*giGkd-"ʆNZ C%3_Y DTjC4T9&RbJSѥ dGr6K^Jtl@W!Q! A7*؃u UVRrDV,UH1X!:>!1M)tM)列MXQ1OK0 ExM@e fN\-]K1$Z/k5CÓIn {W#:0-n~dtTGa$_omxgY=!<٧Qy*Ǐ\{yݬ;qb*x9ey)(SK9hEP6C3>n֐dLD剧6TYj3?dSm6Ti4fWmfLNȠ\2PRz)Ќ>8Wy9?@qf"j53Hr7s̥t>}W ShH&RM< poN/gy:Lپ]"g$3AE $ 1ēY%*zjyS#BÈX 44}z$/}Xղ3_Mp͟|:Bh$ %?!p&% U$TgTHe!jr6jv9imClj>6Ķ!+Hlw^BN>: RVʻ V:FKR[hGyngÏ$|vv0*ʞ_xȧÀzܾL6-I VZ(RrZ")$5ŝtrB- *mV-K8 jX:ϟ?5,aA^H8  vT3B(G xΘS!=  !밀up*꾫ZQtz|˂r'/g~^>t䢺beZ{? 'I"X dHMFHjRxo#@NjL.I6Mܢ[S_&n-Eh['Sa0 qyEA@\XÃ+D\?{׶Ǎd3;S@H$-o(ֶPjRHl]<&I٬X[ 2 ܊NTpbw_ľ--GC=Cs)=q[GZ,}JK<Ң)@PIJ@CG/dr8K&h(d۽{y{y{y{wCɜ^4%'|ߩ2e³TU߲⍔"Iˈ^{e\T^SMͲR-Q%Zz!qSZ8g'!ڏ>xtزč!k~zG_}}%pV__#c.yk܊?z9CriF}%/zŊ yd-_}{W=n ɠJͺ4gk2\אV "ԻVv/^,UvWK_vlcg;f"Eq9uMq8+4Yԧ24>$pIO#)^~Ḋ~̔^5 ;M6,Eנjqx' 5бGOXTB}##|"H =hsĤI&@9h)2Q)fK ])(:q= 4x 0g38ݍcܤ(ǀ21ro~{݆9F,uQB!c)d ѡЬ>X =Q=R#!|Ӊ[GzZ" gg5 {Οd3mf_eZ`Zп|z̼:/lb&F M"Қmomi9jt:]RuW!/Si~=;;il d8 :0M e՚@5s]U/gSԭoț@u} @;K*Ncdډ2~kvYJƛR/ l(8\.Ff}z&tɦX# ώV;F(Ww:Kľ pL]nv8Vej΁sәB?={'JQ‚y:ȈQjW`C&hEFqh$O'οt~|?q8O5v9R@mX,Eq:&RhX.hL4>SCxv>9a_9NG*Zy^PI^_cvwggg??櫟̖4&/]*_5S,I|wUwHt̾} R^ET2}]{{U@ [L%kAYh7,.f* @sK|d{\1eIXl l ҡpY(Ȑ5fz w&Y˖L:hL 3@2%iJBƉb5T"y8 sYaTa=<Œـ;w32.8 xϏ=3))NkQ>k!ᒟ9ۤ x ]I.FMI`SG V"" Ik2i1 7@N=*M@MdhKŦP\s]O6f $Ac|t9*o5g*[|V\್"7.+13 7]삱(hhzlF!$,` x2}V 8!˘1ìddĪ&qRL,y@"'Khe۩Ȳ0ze +M6y:HaeL ' 33,@%HjGf[plRSZRٔL'҄ $L}˂Xg$CrXԢ2Kj !un89ƹ$ʁ퓥ao9I|~_^W4Wz*y| .E8 ئrVZ7˜a u)}H,uIO F_JƂԕL=I'ṬࢤQ;C=="=J;~K7W5̔&`" $z4SBSOz& 2 kf-' d*~117gȀ-KjFq'm=("Ԓ{ 7ƋgYCULZT9rUd_= CQd 'uN"0"dDUTL kR1yfj,)EU#r`'Ggm@&YQ胱u& u}nUL+LӚJ_c,"A"FIV*$zRgo()Y%5D'GƥoD}Z.\En ?"M%2YQ(A3ǑB(c9+0T`1Et ǀ)7=hqvPgLcv?VAH뇈^,q=9|8g[.aT9uQgr3ʝ(ss-Pp^QJ}>eZÐKW~]WB}={~yͥ;!~g?5bqV-a3Wu wߞ=wɒjNԽFN㋣g rYkܡB`fgQ)U j4s9Y|aFߺQ#LFQb7j -wsr3,0VaN+!sh4:5 J K! %0(&I]$%q0,(+,qLtn`+RVq` *Ujp̼Ti.rJ$kWD)Fant`3.;*(AHJ1wTݩ ݘ*aĉS%wSZTi8q;+NEc RYb:JrV[v6 p},r1 ɲ&kjN; _j.,ʗL$zYo`j؅!+.P`uog\߅6#f`wD-[Aӭx $[VˌQVN053D r&Lqjc?CH.h{PqyޮF[j:]-VvWqf^ȝ u$;0\pްn&肶ݘ4j`pEubҴ[pL^QG+Ș"NNS]aI͘d2~hȁr\XmАNAd n <7 .Ycʽ]:W#ǤAV[du 'Zi΄8l~~|7Č\S7 Lr;Gv4 #C ]i+ƺu fd<,q:v2CyjJQ7&tu6`07.}AȚE:IyF# ŮV3ݢErhb̖-١Qh1d-ą jC>5 z肐GqyFY\%K^KaXI񯧌? 
Qcnu ׃&:8)+Xs^݄/> RJye1i%8+s%SL1x0ƪA6l2B)|%,27 ך&-4Bak]5Umh2lJ Tק``VuyPBDNF+n$4 Ls7B0EB <GK]-I.nlì˭ K=hY%46zSǃu$ 溜'YhIc^s^DYXHIaGMжszoT|f1\bXwԇFiGk!u8uY? 4nحxA6!ZbD:m6- RYY*K&^%iZC k2zG4ݰ-|R㭛mhњfNMBеP0np!LOC\XWL L,PG{S9՝ZQX6NmH'6sS#n%iRX.uB' {k(GL"oaIiR %Cܺ) rf0,ըFRq !-%=:An([)ۓ&ƨ iniQX (VaFi3c4G-W/AtogX7+ZHȇ`;x&a$[QfFrWlrkC=68N,vWze:r(ocj ?ԭ=yIIhlqZW/!W}zlMc.{Z 3~jlMZ`XgkZ.zZ56ZMj[ 4V*ZC j&r垆HS@&ցr23^ufBwI๒,Ȥ<"XbqΉ\qᭋs.*y΄0 B`6Ru@`SNNa26~4yƊLfA͘l|@ gPa[V(9]@+Ay T#Q0 ߕ)bH]E>XZ8PTh,hCh4DQ4JD#`<V k *sU?Dfe}{LW8V ])%]y}Vr.bZI̶?̠`h9sI ,Ũ^pزEۏ OwۚM#!mdi ?v$-E$Qbbj|eh8c:2/#dR߂:2i 62Am Ecp]1sն˥1(Rީ1=tAfRR21zc@ǽT7j''8ws= E i4^ʜ2&VoS Y]v&NɩBѶ * @V "L#b+B`M":[ɝ s6 Ll]id~RRfN͎ &b/mB39E&ՆgT -r Zl[pAE[0ZoOJ2^vRϦn͏.Bd/?e{yR<<r1'*{SKme@s`WT; K0ytt`{)$k{ft'R浥INRZ3鳘΄`y3e3F p4LyYNE *TgSeF"zB1UMD5=%Q~M6K^ỒӞ6rKMv-_zK])toIb6[ISO"[)ٲAC=oaf2(tBcL{`D1AgcX_~k@#וjz {;,?F&y?m{d X&v2 'h_Y4 H5P\L-}K$&^ҳw-:oԾ:Uc2[ȍb^LdwS *s-VTM.V/DS PE7LAw*bz?3)Treis+߄D u: (g޴.5NŔB7%%³E@h/j>O"wV}:#/a1-냑a_7WN&AOr%򿾚^bu1 +{}.SɌ8R38cɯ{)z0)kE{ ?VV e-Ρ]qxVO̎o7b3(v]X:N{Dyf=q!J3Bd;Qkʧ§/Sk!b:gӽHmIyXYAISV!FP?˷ J7QWF c^VDB+8 'GՋ΁@VpzϚљ%'qb xذ׮=I+)w^t3(/u7B31$=Q\wtdrB繨yQHQzƙmݕxd[WLʓv?i?[^N]xc*(N'?=%\SrOEܯy8xqWZ ND^hs=i@Xq?J#b2_Lg~:{?q_'M?]Y2D'Am'[ߜ>G?OW&SK1*cMt?nHQI [ooutކz^JR BY[1+Ajs+Or쑱_9Ϲy %I~;+]^MzyE@_3(Tvӗ?(V3~8|k\jpʂ,4E}*3A D`$kXU۔' [dRfIx23pTЕz/j?]NX(hTPR/ B.Lt dH/KĦgN߿$ 7L4ݗLRA+4Տ(EWT^qNB>/HL| dؠpԎ m" @84NO9 *:Y|(3đ ړ-:+$eڤ=qGf( /KOb͎o= f8vZ0>]/)f3HbH%x_s_1us_1uQ}UKx8!903UxM.@;ʤaX+,c%xM?Ԩc.:x+MpEF:aRVu6 n@s)|tsK-"գ)@_X\{柦@N)q^K#4Cao.W ބPJEWhy?EcF2 ]z g>6 s RDOJi U JNxa'ṉ艞yЫ"-m"o6ֈ6 &zGMzdj]- 䑇8Gۿ/Ypy|ЕdnK$3, 8 g0=Q,ER1@B"^ˇݛ"#ZH%S$Gfbg%bNÓYi@>tr=Zs1NZu5m ʚtm`&s0[hCW=V;QW;I>ROs90FMk{E{l=|PivG;NǶ A4BzߊQ/g$, 2`% >>@g#F+j:&zDS9vG ߄7!|rPjxMҎKJu}}k;cz#tǎX28Mm:P=k9a N< Jp79(698=5}Z~9s !+ ?Ʀ C;yx1~ø꡴A:ڔ36>HֵF[=ee lx4^*lKMWȏHahJCO% (n9wӄjFwS:N]wkO/?'j ɻOgggM6tɑq/JCy2$҄?{~k){ r^W1 7'riJ1n9A+ Oyn/K~}թE9ԒaPpU]0s%hjAk@cVX`O.l NI`k&>^0%>04 Y4&j 'P,JZQ"u-C#7IJy;14Dg7F㈗j;9UKRUb YTS`'\/ h> W $O6 HO0[pU ;} dN.Xű㬾 *mw0`s~ϯ'?~j,\nj?%&5 ֫9v voV2o瓥|G kܐx3gd5k,|kknE$*ٙVM6%S. mu˲[ݝ|d]@v&8i]޵BDZ9q=&ϣ煮qw?ɞϜ9nTkv7*g7[Sq^e={wY1n%?,CoimQfwbW`p7Z LW)(pLFy1[Is] Z+UI5S1+gwO-4yjoRSoRu駿u0kL-8Q,TjigpkJ⡘.}7&F#R),闯g_3JTĸ^_J!*8wQ˰=ϳ}W.bƎ w3*- cȋ RX)v+DQ(VR MslrqiIQz[ 4@$jR| =M#i(S$MCI$b- Ac#l)s+VBhlQZRcKTn k##Y ՜mozaw,o Ba,PPQ("nЅf$2۟֋|@Ѵ;f:~àB7]!wHaG/sr dS<$9¥*h[$)/D  ɬBȬ`&~ :uRⷌ ʏ`\if`)!)\ l1Dz4u;ѠӵTKի&'|*zՌD" yI"^{Š`<)ҧ;O p:U/wPP14*r2 }}4RwJeO=ƚٛ'e5g=fOqS.zTdypē݁7lz T`{s$bw,VSk8\6kWh,Y)--RX.'Bےv 0d)im ؊~48 Q:bqO_&bwVIȻ9iÏGc.hw/5#jac3/~kQjl]=FWH迳67˪=-ӡ>T'7/h9k3|>q.˗ϜfRlF&> Z\V-.Og[1[!-Lhio3. s 3e2@E 4)X!|X})S[aQہ.F~>☏gpֵ4b9hq]2vض𙫱:%mfUۂ˪meն޶>s Un0"wŞ"' TuԝdH 4\cjeN=sjA\d&$$_[iIֳrY9 rY9 uxaBjsR8| %u24+ƥʥ&9ĭ]|#&SX齶jm U0{Ilm$|P6ma{F pNU|Zuh[Av}y;Bӳ.[$u-d ӧ9ddWWR}](lAp`u,vYNŔeƩri8@U_>CW٤e|nNh|xCkWjC輒żh P Qiꁭ8?DV)C<*H+8UlQ9{j'ovJO)y2Ua!6DŽILdp.|B& ϛo<{>~ۀEw</>1(R$cGiORڿȋ# EUkڐjӇŅbbу@w`!G9e9 nں %DHkZ`)QE# S.MJm!BpSnge`eáM'bVwUs) :֩"E؞  pz hyO 7ͫIQq{v8&&o;"|2QDS:VF%+gP/}7z}B#UIXd BsC"%};2}Ղ/s;rLEh󑢪Kuhζtٝ|8-)m n$z_#1ll"5[hw[oo$2,~R$Е[H˹ҝ?JZ ~w-$(Q`LMP0gR p q,3HRᢒ<B/pa ' n$fΜJiw! 
%礠–"^ es,\Qth@9 PV08Ӻ:Ð rCav5jCg]>ls ͛`AJ\[t; pS=\wY;*t 3P Q9ER3Ϙ _:—H&l }&]s"ЫoWcjd.j8,[̹oЗ޷v$4ɍXSGץcuu\Qe*ŦZd}(X|6us;sQ"3MT R okƓ[i1g~@r+,E-g9f6jZ} H[˜Q2WSoxVk7!6귰sMhs+)Զx65Uѻ]`#rsrRg>wk㟪xksIj+/w٪5h(=S1)Ơ֐>+c۹nEv{_ $ ͳ_]wBQj>٨Z[t\RQfւ[Ɉb*&r0J|?O$9F_{ݱfvv1z>T_+pmZO|^^+ ozDP8vCemĚb(wvp~;)=}D R"-"II(G\v(/OiZ,h|3Ebv_OV TsvHm3}Ͻ-#!BvRuxM(ϋE;C,5prRyJ0 #^B-SHN SqaϾk 8#\fs{;UC'D;6ly r2VS֜ X;J:F0dQRʹ|ݻ`_P)m6#DѮ=8OVn_toWdgǏx|  ^mc4ON񀔳gEXn.FHkZSulN`ϬqI;<_S!cS͆`@Bi|w~K0ջ̳)2#7,J:ٻؼ{jZnr55wc cDkˌ e$d0%/ͩ(A-fwW[M# gCl4,g#!Z<1Tb~tXl 5i`?p$LxŅZ%;寒BpCe - 㯲$0 -Zfyf1*;&alb06&AXYAZbc7Jn ҄NHͲ,tVﻔMջ Z_C8ZӴq0v{/t*Ҡx7n8Sj?A y)0W};z8 ތ7!ie{=Ƈϟ1j*|gǾmBr:R)#~I[wX{)݄N!~K~lĦBuĵxx8/UhuS͎h= NfVq6\E(e2A g@EV;)YMdUXl36ԹFη}Ȱ`FY-C0-rM؝7 #;1KAXt۾&N`FX4%R+KF{8#c6Hh.!px:!;rV׬|ȥJ_o:Eɗ nȤe/}XjO`{,+ nJ=aS]xLa(\6[+X,}[ kM٦M|r NQ7^* elSF˵œ@~Rx]M|LՒ FT|Җ3^X~<&޲3%4Jk,V<3vecW%;]=F?lc2meb$F )T[3x`)PaVLet% 3E{tCOG opj%:rWIKXnDf>=FO kB&)nĩW<9hF•C%(LujHSzWЧ o6p m-?G|y<[f|Cޟ!|6Swڈ`g8}`>ϓ ]NvDwx;ns\HqbD* J#dJi5p#M%kvdGZꍊu.H~ips;7/xɨEiE%D\=n*FՊ.ƚn ⱝ6'-q ~qPc `V?qO4ȦYѵ}h(XW d!eцVmZq5DPƓl?no2d<1H3[{{P]pKI:ƝVuM1X>h9ɼkrH!Il ^b<ʫ~8 o6nߔ\)J.aP3.@H $co^ialIL%|,;uP@JI&ޛcѵ0>GhhAlv|c-8¹ b)ʫ0c57LYnt $DǃLͻ}ZF&=ZDGXt}Ek%L;&2q8L!F<7Nnk5y98֟{&&|y1Llc}i$I,$ݛup'Z| ^GQgŸq3C)En^7~b׏Go~[(yN9$\,rS߂w z'1I{2} XQ7_uG#4MpmW|Kᯯ&S ÃH#e+.\GGRuR \G(v8¯XxNP]Nv<*9R__ClhN WJC}~{/_ݏ"iДVapvAv@o,flvcP-3{>k͎/Cv{oݜLJ@/{{e9_8/g{1A|W'Ÿ~5MN~?G`wm~)4h04sYP$8g 5gTrcG#;۩ȦSr3<\Ds9]9kƅoٯ"RْP*}ߒ-L e;@rX3+WáS d82a7Y [ LYpǐ΁ "<p[>h[@y'ٛ=T*S~FOnyoyyL?%!LW*/q> _EWRK3d<'oyG:i>U>F.ps/@8^ρG@lGU TA:[I%[q$_|e_$K$$K'dYÍT/|H"ȢD( )ɗ+B=_ɯ"2Jv0 (o$nB'~!rG[e0ɖo鯜KpؕQ*e" <ld 4cO3&~DA< @]]XV|΄ui ~.A!\>ǏW=7(8݊bm_͢, 5%~'S!X8a4Njx/S%`T jkE6X]R]\i*>|zroc1!|cq%&*ԝ)D?5.aѫi'V؏ -D2jaVKnMDh-]49m?KA1hl1vT,^)GFhYxۨ$nF)?f?Un9SQjק~psp}1nKTKN'%Z~^y}1^'޵Y{E zQGܹZ:˷[8KLW0A ED~ Ccez'sͨJ*XB"z@rKk:a@'ʤ@ j$6_=fTG|4|M=Z6o" "NDXJSwQ\]ck:2R /r~02pBERu&rG$?M*Ru3ѿ(#!߂+?YV5`j^U epBBMsԵ*;㘠"shd-*AAY4stu#^zvV8@c1`8"K(u['PME,S8ͦ$^wu«XѺ!?uDN"7|9b)kI6eo@#̵8f~k5mp$ 7v@ڈI -*P3Dy빧;TS/炂'(թR6Ɖ7DnFHt%fkt07K??R) @)zbjkz\a1y+xJu#\R*5~?ӭ-> 5 YVA[6N mJ _@k1୪A-SH0QF. B*SQ-_7{7_I|u9A[[&nLIfP \'_NQ'ZհrëPj HTpĄH*B[M^U{#$$$x$u;΢Gf!J5iN"L 4Υi@UP@ YoFR 'Q Vp @ƩFS2"HPcL'?ȹ5م~,K0bhwgօx,u ׫7*] CKD-ZR*Uӧ:n6 l=ITMIgOU;2]!@7݄8_h9dd˞7À2 ;qlAӄ --FY5{EԳ7ͻNlŕ$y[嫼~50tg䒺!#A64S[0A;~فQUe %$;NIIƑso{YG|5f&B˜B09fXb{Adǃl3Pȓ 0UұęZELXDzdH2aNHLNV 96! u/Ȝr.EYj p.csAP!E yBnERh9HW#3Cdz72y},chpQkt}+ z02KݶFOHPb0dC޿FO 1;΀!!k|Z &\`"G1sZzGƧ$1&S۫tԙ#)46Qҵ }Q@ߗهliʲ ,zd"UJ6Z2MNo9:X#iat9M-"PM4DSR^Yߺi1 iVWW+zc/m!S 4$! c%qk BהNc,`>zƃc5^fF(lpY-8".&I~"`I(EBZcJ25;Bc8Ea`XnİJ0qTLe6)J$R.0 rQ͆M^z, |:^Uʱ ] z~i,?ʎea.W,0gG022p:DJpJG)?Cd>_ܾڢ{6|d ۵g7= 15P (O@D (r ޥ,d 6i:,TQ;kEL\[aG(0+$) R-ƥ񘐲@ sVh(>xK^)rr W`$2R:z%qCV@(U@1Ǔg1k`g݊sXn2HFfcC kc! ?>9+ ;Dn4L cV)ߙF0F>9ۖ Apb%=\MJ)N/%#f7ggE_44n3Lkc{LY BZ]S43 i@ IdV`ʐL0{@9" 6q<Cq6=*ږξqSFa}[!$92CF#B9Ҽ)hӞ=iN'y'YNN;߼Vl(Ɏ%#vj&Ih.''mpߓ+qa$Z67I8EwnexCpC2B>Ih|)ImAVspX[e׵J'ƴ"b0\`t: '#o07!U\ks3 Mhs8mq2@PǤ77gŘ0?[p _I#ӃoN ,wr0;]ҲƁLje0`$v`TyoC>uyMZe{!XTEb\SVlT6\Oҕ+2}{,\tANˏm^;Q׮OR5D(ǎ9ř wk}RFIle#hzObCBaRYs3( t&PF$%I^hg4e }4N nLι'gdUs|Sl.7G%7)SʆR W`u#\ڂ97.DL=*J27vɘRxCP&m| =֬ExSY]z V n|_hэhM W6 i9sfb|FC&;1X᪳Pk /o|en[ZL!ikCQKDB#osDl)f)i<$@)Z79a'׋ _&3"bqڄ?{𚋮kË˴-#ͫtVT/7Ƌ+Ќ\mH%ܢ7)Ek{aDKAqN>pp4' A[N4c/y-;yU +X͡ p{yB q[j2RHC]Zvkj/ܩ5e`%(ړJ)3He n|m|(Vn춼]j:aęp (GTgYrl Ba^u>J2ETguBfuT𞛳s`ui "2UeI:|I#SAdOQíGjV8wf8!2-ތa),tfW;g$ '>f4 .4x x j$Nn#6bjJYc2fg: Kp\4S9Bb l4gκN&FSkbjҖ̿XE.MΦvoC7m)OiijfAnqi0f9viܔ#}t_^tzuaRfl۸MEm*#ܣ.A+?_.[}ƪ{Yf6o ̇% X\ttW,__a-WVwRot2ݔ}lYqlIλce˛I$JÝ>cF;|(۵mﵾ.o6\6I&-`=˺FH![?K P?7>nozɹ:nN׷ՃS/A˿ūR{~z]ZbϩXAD/]_n_/o^/. 
>^\mU.XJPWLvDE.,}̽eӇqSlֶI9j%`WԇZu.p rP߶wg4G('de|cO=zz#}LH@|@xߥ!Zӏ| }hآ2HWo!*cp5vm5$x,=Yqw֯{ٶ{PVW^9XǁvS34BkqGie(ܴ1nn91Ak2R9R4vm)1& Ĺm1ܞ']7=rQ~^SΕűb~\%>!:e-X%ywfE1mm9vSij+ tʱ):Ȗfz l狶4Y;al9V)bj,o Yݐwz8{\uT}vA\N:HWp=ӉN3S醻3:G훯!jkdp+-jq@_`7̌HWv)ѝap}d~~\=wr@|D!mZVѻ֧mQ]AttOD'h1IP>.pWR.T/;6զSVHЁ KFH\O=<S<~+8'Fz $c(ZKm8z5ݗ?>J9nX|.RA?8~|qyqvyv9!Ǻ$ J9UA\XNRCyg'8O&a1$y,Ӯ!D9K<;cs~>_ܾp+^>;unUo$|`.Ǐ{Cfvx!t=a3dT g3O_zZAzzz)7xI>~qz6/xd= e=Az9Z)z<DFVi_9҂ɵ'azTt,৳01|=R:r:$9S b>xKA@hxUw ]-G3Vߑ4e$m~TmÛ$k8Bௗh?՟Tmy=o,ǑFsțՏCUkG+T7JF]%!a8[ׯ~l. M׋! 5%-S>u91;7i@۳~>kԀPѮ{^]b9˳KН*{>]g?^(s[ '7?euX#&wW7ߪ?O{VO΍w] X heo06(}VaP` #>>ܷy;r{o7=_-Sɑ3> itun+/)J͍bLheޒVN/F+d m4pV[StOF>G{1n zr3FMO.ꭗn~r?ڋ?kbR9)^ҿY\5ϹF2Jgŝo~(Wzw߳j+/-nBMʽ/PPPP׫z.P__\U2{ͅEFy'Em[_xvz^YLv̦Nٹͺ$ZȒ*Jn|==%٦LɶG, pxyHP¥Yj2dM&)3Y88${K#:!k[R"@,1`> 6ra4t1cF+ w ekx6Pdlf}nslmpY20Z 0b5$jjm%AGFc9=L`bZVYh س 8tЉFf L&qV`R3cU`BIK=ڔB9)0@ $qu4ɒ$3j(eHf)*q:㒖u4 0#nK( 21P,"0,J@lfy R0:=UK5U.RX{@Hne~M7$G**à0SԦSrA%0)D T >O$+)E A4-SfX9M90@uѹ=,/ Eɵy.RԣA)4XW+KnyPZa8 &~&lD> 14("`AL{>+ LKuC,v?.:ğl)M%j쾹 7au\"7g&C }h_=~I7h4t-%W^d@[|Og?)8r E&Lغtx ;Ϳn׉k wGoN~Gnqxn bQm B2<+?`bǽ~j ~w?:G~cF|߸{ܹoIf'Pݯ?y}󫳟w/2Mdg)hr\V5-\hhS c' ށWp~qӔL7dL5c* Vj`'HVj]Ѩ 槖kS5{z@} x9 N/fr/8aO]Ξt9 Z痤?z|,^%w~:u Aty~uLv/u)9nq ԫoP)+&wuc<``{&=E~\m_ hplw v M*2HW/Wmg}9fC9s{ΝfӋa~6߿mVʿ1_vM_|˔߅?7ߟ77lk7u'nxָAD3P>=9k'gqyow8' AAO B7A6miV̰醴׿luR^At\PB_! ޶2gM2}̃9LyVXnƛ.˝gPu;s537n^z ʱ]pu_޾`FsLo[w=M)Ϋ̓Ə!$/ (vƏa΍灿_EP(WvPǹIŬ1Kڹ+z{ F/40aa/-P {˒a{ z5~ ӫ^Uۿ}l(` u]_25R_H{MUXq}4 pyJM~Х 2*(*|#y_I}F4o (n֥,\0;CD(ʯ;A(pmvV`P}{a Q%tYrq=a-OҘ!BNI *o'He0 m"-8T///U7 1A0DGveb`8DXlw n߃[E`U(mA<Ӳ`dobqxApV8o8}`|K`k÷` %o=h!iGӋnfmwfPw0@i^A2%pbEJp &Ի;;cY>G$RZgۘq,۔&YfYҐ=/ܱ@<imJEiyח]N@ިs0& mj (YSir6mr^Kd)n&o>XyH\ R"Z?XV}IM}|-3J8E跁Wz>>v4 8 \)5!|) pq.Qt~1(Z S;# V$zGv~}f/ƊLpB9+M\W?ZGf\r`/:%{{{>t+[+߱諷lEuޝOlA^oϤR)ELS JU`$ǣ I(RKD4ݿhwӤ[P:+SWf~^ qծv؃pRʩgMn#*roCL Q-%g^2 < +DJwғqkٙ+:XBM~?p0uJnN)?_Wf  B[A^ Pr+ݲ&@WUp xzq,~4mbqEؤ$( I3|[.B[IWIZ4aOI{T̂78 ȻW"vцX5uW+QΆCͬ|+hdX?Lz f?#.iMHyK-i6%*-l$U#_!D6X1~:d9X(OPI(1&# êRK A,Mm uq# سщpRR)"2b%)6X& SNra]XA\`YHS#xp/AZiA 2w"58A}@;yqZ \g_ nǗ^W 9iT߶ZO;G>"֨/儡5[nSo*RD˭Kl_%)K[ 玴~1nx,[#Nl38+8g+?ddT|gllRx_Uo;ѭ@.!no{Dfv'/h r>vM.#ݾ7 v7l~W@dm"͆}lpI``=?퉹9>|<6zeU{ rn {&2 +Lj5An^FP.W-UkubkA ic4'ǖ Og~s䵿Vq2wO37q쒜 |= ^ ryX?woo :%ʣտZuĿ)_ ěyB[e{)_b8xڑe-fս-?oeg-i^7ҢIcJO /jQi8j^}R/\Rɛ k?Y~@+TtL6!)QRs+ˬYXhT Y' Ƣ6pgiOm\j cASAm&J >%ݔ`$ :Q5F!{E pR{g+ vJ(HK,`JD Y)g$RB36gp4ԇl%β*mPHkmfBC9{#a^/A (|#n Fi -##"I(s"Ea)nV9+[`%e.~SBQ 2 9\[eVW;:z0Қ)ήgOi]7k y* 7Z+U}^{.i0C:a8^HْkgM۫)jd ġd,{ӥshI!:jSg33QOo:Ij%tє'R l1qf5|DuR2Vqmu u7V@SY0 t՗7OblEi K  yX17|(v9s;Rl剉9$&N*kLO*&B葔$[GLXF:c!„uL`Jc FQ021ྤVZqAx)4㻭$沮oSu}-w9&n0-tܡSzW=uUמ0?tڿ>&\*WѩRd\idGn$M~ ӶYX!T/%2`$_"dg&ym r߾ϫ,댌vo]> c u๴2r+;KHeDNz;J>IGRΨ ^FX:#ed5vVRRrF+C޼^~=0m>V&n'O:`̫.|B ?X}xy/JʰYg١&"Mo,yx;^ꃂ8}k*;b|g nOd44JUfLKĨ1#baE" 㝵[.=;3k)ysDȮzJ 9BZ۽l y:Uq5 u+XBiWIc vUhT6mR3(|j mεa{ڸpMUڽ˧Wuj-c/PR]3>Ì/F.`a9-(|Bb@1yQ .A_NjBI/K[a8JK>_yuN #RWITN0kyIKBsV麮+Js2E)y,ڋK2(DGK.,]lgɆ?VWV<^]'pRsu xG#Z5WwJ8bj#vWZ+^qԩV~lA[玪#uޖrPbG//c^VCU!nƓA,oY MĔj:A`-5jlS!cjI\% Q !\ʼnZws?xg)WlH؊VqyR ixRK&EգHY)cie“DjQKjRCh,W ,GD؄ptmկ){*TPv%d\GEåTk/\,SFS/ưD2bαL qJ@B03 MaZ*1 "&mByQYgQ`ړ> R1XG{kJ$r4=&&9&t.0PVyt.q.G00\ѭ&H.vR2N`i5D&tc.@1[|xRb>Jb /Q_us#z09\56u1hiǶZ-o檶IJIOv^ Q1&"Gw65:{Sϔ'WA[n=X?wW|Gxz:HJX#e Kc!GL[)ȪTRHmZvh2GۛIL5'$_  eOHHֱtr)҄V>ehK926-aYbQ1&Q!81Fqz 27C<4:Te/fr# \'Lp7c a{{2,9b *rXH)jH7K~: ':шH8"61e%)u{;Kq',-ZWͰU~m1z:)ʤ?ϮOvVf]$ʈ-2Sd 7}<&|t9 VdB&{!n[\ VV;+rHnL{ jƯ#{]נvu;i tv6Ȁ_JK$ՌK߱RJSN[־U<ܻgnAn~tqȷ4y^2;bO=OW_`Ig_:G졞F1z3/Kgc3KƝg2i'.}",(xu: +ۖY^"W}Ά?S1Ln4wÿm?~}IDkUL*"DnޑGWB-3(Z"D( )GBs09 ɝA #]{{Rș5jEǟo<)e3Bٻ8W} ^U!C |/w_rfNd.OrIj{vfKA0aJޞ뭫O,ѯ8g$ 'Xv-md 
@hQv]W#.jWd2$w5vAVyt{OvB'Z$BSehɚ˾i[}ݶ3l3Rߢ )lPp.Iv$SP}1tgָHJi/VҧiFF~'ieK5땬&+ZHruVfBAB\BiaI/m|Q) ,oEլRX;:u醋?K7wwK/?';aݤS1O?|w8_$Zkc͖qt[WyY? d: )Ť.$vlfhl7]Pw,FΆWIQT(zIo%, Y2S+uYc!~2F~ { o{5*m.7RM~ '@VZ$/w݃yHk?H}BnxG2ELpSЖa&]f|{?g_ C8g b`~kmj{ P.fbf3my$ۓ3pqcF{ۑFtkA|3zSˡ ׋ /~3۞`HpoE}Czm몽E o?tu0`a_?x$B8qI5H")?dFfUfZEm}#{}O= '/uochό`;F^<9v,Ttܨiu%ְo@EūDg&^b>95)r>23EAJw[Zl[b_`k2dsQtJ(5,˳F>Lj٘ԇQ2=,'b 9,CLn^C wKpfvQ Eii;I9+EdyPBI)jA^<:_r Si>@(BB.lgΪ^亊t(m?rZ:a)+SOoW0W}-&A3ϧy6 ]=A=c|ެN)>Svo?2&G*G}X1l!G_\ņ#';I-|uڊINjᗘ5HFQq _f)H"Sm&W\N{SbMԅ#C񬐣jft䝌ƺU#e:H$y?wUouZe +_tN9 bxa7:IxXDe0VoAbնjy)j34wlLHnڿThOنdws8ܒ&D-6NO˗z9mwsƀ ˜@83O3P)Yޥ;궻 3=Y$ , ͙P,ۣMXyBMbiV[g}4hasMSTk NqߟE94z\)#g;cݴ晉w<#C^=~~rr=>tp"kZG{c.3a%tuj浸XU}hN/^. f ~'[M--!^$IDr("AasPH  !,I2 /}~\9ׯ^_SV7K>ŏ J6]ӢWg^ 7<{qt84䰟s\xq\zcˁ$ oownr<}7o) AЙ-{ħZfNC$^|ۿǬIBfw53,= ^1"/S֨4> K}{},Q~g:>fN#q!-{ϸ fYvlfD@]+[9 ;#u&[߅ ujjWgAcvKxTD9R%D~ұ$FHe'`t(˅QK-DJ)Y|Ps$*+6KhevcnFkWP4 nhM(Hu23ɰ*vDIVJ gD&D(| M%dR49; ٲbަ5?A[ I$(ٓ t!BLDhNy^ B)e yRRHOSW^g^CIGg'!Jl·h d˕M@j ICzv* Up®WU)vQ0SƔ&srjDz B^o+_iV7usW@(Br rHl2wJT-XN7sr*`!W/9zYȖw>!v1˚%KfE~p@ oZM(=@jld$eZD QtZ : [rlN "[l@x")_QP:|TA`5pQlWb$%1:Ox֝L( JnSD-@ҧRrtRNileRlYH(Yi J$ ӠgJџ\5eժ6R *>.8KrzoYo38XpƟns!O;EeX8VEUVx% 0ar8nL`Y&8de3"1HjDÌ0W Xd Tz4'=88)?)f!ߺ,%93R9SnQ baURf5X96tI>xb"*%˺d[K]VQިhL/7 tA)[XƠMDDOJzP5.y"hm0MX5;d;, S\-v<ޭU?/ns&vőYWzeٯ&8O'ƖM19`'Q^y9{V،rzi:əplcpdI7O;X T{5%ՙbGdKܩ"CpoR2S,ASAJ}H. oO o(9WMs38b~VvR8MhbMa2q6grM<hTWlU)#pKM[bhj<^|Öt d_zw۫w]nZBݳ[6|{(5,M;" =8cȁ Ų{$o\7k;Z7kK jeNB'x9>h90^\=GPW,#]ROFc9dޭ\}ͬ@P~<I *d#eH %Ba7abdnOkdLbWx巹ct_(IgE6ƒ !PLk<>Z}E@̘)/JQx-ctN0I4sN o8a8:n+* 1WgUA`xPH{W.XI `4I6Pb=,pk Ũu h|GNтu,&r% t((-2eYD)E#Ehrd!@IEk ?Fr9NkDm)x͂6q8 6ǛT1}tjRӾyw_{nkM~)/'wmhi{Yw T صXϻfr\#zv{YϹ;KF^"’?jC\ډ\"biq& ܞ](:ߘBޡIyZ$f#0 .Pkb6^.h4nqfCTl)'w :)fCk!e[vh4=;4iM6Ԏg6s%ٶ=mU~8ˁuSfM $L1,,`P9cc䛪 ɷz8lݏoOMjk:'1 Kl^<{$ }jZߎ/?4"ڄg3u ˎ>pk+7|1eFs-;UfEC2:l(*~vR#j{F'9n+S۫Cu{_6si˅a3#ޭH5 ArD+F׍&~h混ˆ(ޣxB7r2Z;"2H%`~s'[k<|1Lڗ9`z1zV?D?0U],.?ĻO}|N?`G qq{U_ D?`uz䙡qtvЬS$*Xrmmq5I:`~WKFm@c͢\gZ1h*$[rvJFxSٺ4lKң&'HzDFGMU}w55y`մZjS1>. }gX- >A0ڸ*%>{J {Q}mM&kxhR^3~09r~+/W6yR Ǭ0Dق)c!QS 1aϦdKY˔tWeJ`0JxKU> %˜ V]._/CCƂFd7/ &YQsbAQ{r㣕d OGs7e~y`!?;cצV["c Q|μ3Y,)N7x0C(mx D\ ^{Q6x_+*#RVT;Q|_AUJ2"jÕ.0ݣ!xjE6S;x7xO+_yWlyrg47|U?&9_CN6KNsTbM?xnEޓ*t\5F>J&V஝DvӨrD5fLv~UP'Uh\>͕vz%pD05a"&-ˮNf )jbS#E{,={Dȣ=Z`7$s4 CtﴦIۤRHi<:O *4?'Q"֣}Q[T *@d(i@ +H5^ 'siI`~JN9 Q\ȣ"P L .A=® }21HFr)z4YN9^ &"*XTSP;SW;԰|#(Ǚq #˧PTtѣȕWW> mۈfWlI ZpVQ`cSHoNK7E/ݶ# @(Eʚ3Nr%4%3iC옗^` B]SQ !}lf-ݪbwQI$x!Md߳/oi6hbJ0sRX 88F; bT'BS f eI$ h' c$A}$`B¿^hu((!tQkbh A}Ed]e^Zl7;i$!-8-߲چ,%%R('FҖ03wN&G;pǮx)Ep5{En $tkJ"&9#pT u"bj&CxL3t#̸&%sX3'ZQZ|aSRieh1o=!IPJIuѳl- %ٲ?ASMH)n=\%>m3d4/zVQ y=[MD~`ԩz8,Nc u5}GtMNP L+*9rSB9fp #%!@8Fj@=JF4e6hnAT ju>oܷ˹d^q]/u٦A+~C_JP!8CYz76ەE7_,׶ ^EP!tQ,^0~v ߻hܓme #fqj\9l5y=|!:R<(Ɵ,fTD}U /G"S [xfљB3U[1Ϋi:? 
>& ٭co%a\{qUl8jB-[}I#,`w7S>Ɲ#fӂnb|*2%74` Vu5g\fT\f%5n" VeSd݆$-(Z]kLhtcO0ǂXQYVGa1zakiQj ܁,}hǝfp,>dګ$#mcэvRL܎Ԏh>ih'k[L`+)ɤ%AK,7qƖ!TX3A.r,aE "5*!ⲵRNӉm?jy meNkZ`S콟kJ:`Pcg 0ii&RrXns1z3{bM_n""l/_BcAG)\0hQzo _[vk9ޕ% I~ocͻwo͊v?QXS|"i_.a0b?^(݄ V8gG#]}Mo, çO~f2Rk>nn\^.=:YY rR+ k- `a <z\͖n9c/b'uW~ZuM?BBzTPa  K*p}tt !#5 9Jm!s^ x{#::?GwzVa-R{WRBb-H֊`zP$(α䔒(t-wOIOH>[rxh1¬V45lIDF!>[{]oH+]ZI s1iGTJf'$%} F*KBqIu@KS49z !Д 7*(y@dfm&X .GbHzD#)|PMdض60KjiOJ*Q|uId='tEB M"!.4i6>.W +m$GEv&Mb{ЍL|تߠ.T*e; ~dDHFdR 6R;9 Mczzvm&GykOu Aٵ.7fӸڝ6۷MSy3%TAN SM{S6eNt:vURvi|zUf*FNߎk=܎cD6~oA?e V}5orIDo}^qz,sr-eeC#T"U*]{nؚԽ<;_tg=-{P 3AN :SKuG-R$!g_W0/+7־>T.ڂb Wiß]iJ>5>GS9:{zpROL*]%t[tV#&~Oӗ<>OxA.T  Ei!pҙhb Wx't?8g#=x {4 gc,5[` 3MQg1ioeB{Rt7!Ɔc6輺$iaxSl!pf}h$w^nГ^1sp=Z )5Y+p_^Cl(*9 G}ufbg8G.-9z0ep:NT<{T y@J4Znxp8t̑s|XEށ@.pEk$?'0&JMϩyK ta4A !,6W^|ܙ/OP?[.Iu֞\l>_Rn,}>aq3|!F苛`&7rR=qK'S<Ƣ|oraRvUx XS) 6": .,,s%K8xj]j޼deٟ@}j--(a6_;aP)iZ^m>TӡЩq෦%=1r|)(TZ$,S<,8nuHuaoKdiNZ_R\wD\fkqt4v{ww҆ͅV<3BlIRaox%gIܠŸ~B]:anW3ՏKKdwO%AkMKyTRtҡ!jjq֡!tR$a[:!PDu?;`Zf~xI^`5-ԤD{'RnɐnI:hGD!JoUznoMdށt,Ù )+4RϘTV A4JPl@mQ)rpXPNU\.1JMm oK  6ʊW֑v/Wݬ,h5THCǻ*FG,Nj|Y+a h!=_?tB{G 90\Dq~'l5aF͞dtܨGE6,J7:ꃚ@VUT_=2O]%N筽[~ԇhM{LߏO԰w$4k׻cRGic]\!/U~n҇,>g69992FaN޴\Ypzj^ [-@`! D#䄊 QªT߰U|E3UP>JX Uj,~j{ۺ#Q}fw1߇zx>Duv}h3nKpOn+j54t1؄Hy / . | 'ɧJr̡u0%i#S4u :o w8Ĝ}82{zp֓H>G( I4EsɠF}cn N[;=O kLz.ʺA66B3ԯuK?{=k=V8ѓ<^Qr)(܁NPb4RRh1h%FhP <%v*&dZ*~`~_Mh5\-4_~*jd\鲕I} 6fK͖]Swmy1"M?վPԊ"*uKu%\^&K=U]Ia-A1Ǥ>PQp^nfisjMB=$ [|]0QP7GOnE@VJy tJ!^؈=I8KtlRz!Fb;ͬL٘ ʔάBJSەXRD *+Mvw,JXRC9MGJ e.2e4 W?EVo5^s.ZNZo:=+J(2ѭaAJ>u P'u$Ф85fK<5FjvЮD($~8%踰`4:KQRBBKr1X'"m(3أEA eHяQFahq0F$ѧDb0eН^^+JZ,!D9RkA}zڀs;U8kk1Sp2[*&{c"7RM)rJc/wFsͥa;- J1Ժӝ̝A-j*R026jk3"^}Vn̔:9FK7TSp?S cN&["ږ1 ן +\0'. ng_\n=w[?k-Sƪ;B>FPF=|s rfttDZC wVS"r ή[> Əi8Ds0`nř+3HEW>T^>6vNRS֏BDk ZQG9ճyl-Dufzpc:lA߹ra"]?&/zq)Q6I0/c9(eM86c1.Aj@JIlye#qR,; ^>>$$2QpataK -ag:2Ƃ9z¯qk5@,ªU'%O0Ea[zl/m[mPLp9Ԧ:{́S=!~M:}i[xws+3߷P݂h4ƦhSxsڴ¨bيh%zQq9e[/%1emov~kkQߪYLB(43(iE R[׌oO9#Χ\E{ޠWoq N:΂]Yk 9#4zmCk"\͑YsG; EL\-f %ӸbU]ڣ$59n;eFC|:q@R"]o (ЁfF7y`">QB` ٻn$WXz U~=Te+I2[*\-e%J);mR<"s!)3qE|h| t7HҚI*k6T5 usUJt}#!z+X( :U["mBiZqJ:|iIЀ0˯LU$>^ﵣuB{6e6pi8CJ]c<.3_n(!ٛ4+N8D$U]N:bˤM'gsRtcFqHAJZ[wd Iyz$_O+ 0xP*SJt>TӞ6F4rDo9eG5jfvhk$w^qK\9nUco60&tv+VԒHu ʮHCI-#|A8[;HH,Z-_:Z*$9+*tǛDS݀6fڝ_-^}vF*5LOhs`)JJFvP UxF74 4I}̣@=GpG:RAۊj7VFH\ &C`z45]+E]돕jE@\\_ۆ"5:h-v(Њ=qJ*s%؟f3.egPk)i|;juilbOA_C"T +Q?|͛QӇ*+?|MH'==\|OyoNK^1yo|IDq$ҙLA,HD<9.$?ckԌaM)1DR {1jBwPv!Rε z nT"ÝJ\*7bC 2˝aN./||kdZi'!Q,' D(pVO늘h\r'h R3m&7Ɂ .58g~g!}EӚ ; n2℣0CFq*ݟ>GX|tZws 4Kf4!f,L嘉3qVk0#RS+ R "E(kf*?Ձ*|=/ඝo }! 4Z&? 
峁|6pguNdkj(ߠhhx!#[EFF|kAPBUfjOFz_G>(y;:߽[<= %ZmH[YcP.B#^ 91r%  !cTP1dkE Km+qLA"+CwsʘN| >d q#W$#02>% Ev$jͅ VFԗB*&qX!%&)Naw`TA'6 bc8W(YaXa!D<ZwH)l$pPU Dg &D*S76 !hvF>]ڱ98EPak5wYZI4Gtkq]0vknr8~63&y &3R{z 1 QolxOj &GˏƏ~Bnmn aǴo?zwVv2ǻKoOSZy3SΥL~omJs}?cljl(ABЧoS"@̝㊇*T"N" uh&_.ҴLo'ҴxCU2 Be)#!BhK5% A& Q.$ھh"JiDch AsƴO8ٴk\gh30 0 E4ȜehA!'aH@ThEDH bBRzhp- ; 1w20ffE#+> R*<1!-ӄp!-?jXFr &Pfي *xew]&d%M5hU;(%$ADb F1*ΨiK"EBVgBF00l%Wa"r%uJk{@ 9xD r g\KdAbb nmЛ} j$&Ր\M|Rt ئ$ empBD!biT/qog: WOJ r* Kf9\)X{dI\ V( 959MX+KdW ‚mprȒ"&I˽1(=]̹ VXH)[6Z o> PWà=DEMR\D\W4A-N:zG+&s^bC$h"Le<d36H}8ٞ9%8$~DJ#N6t1ļO4s(^ {.iEA+V& Q1/Q ZOd"xh d+̥ pt63M(иQE0IqD+4%E鋶!D'h`#{:h_FU)ecB|RriW˵G-|a'IuJy(1rlE-jNN4R3qQC&5*^T'ݨq67]l#]MQg8N\AN+Fc"k 5(HV8͐wl7z}xEi_v[}V[pM\QlyH:*E2cJ%*b6?6-gYϾGIJdϨ(kͨ Hoo_?ue4AkO޾2^]b[v7L8Q.kxy+Fp:xyb<"ÖmD} (խa$GT:$R 2J2h+뮋p S}Wʒ^/kƀ˂j- Po)B{kUoX`kGln~6=!kC_G0m>~sտNZA֖+]N=\t1e q)cRQ{ -y@0W\ /R(A>I2(M1 -BNg%poN!GAXBIڳ"a<^Q}01@eWi/JD+ﵥf+y_ #P ζhZLz cU*'Q$9ѹ+dG U N H%m( `6TІ^7`f>K¤d$LO3k6d>Oԧ (+,Jdi) P ]<*E~0wCB (攋nɕZ*Wu:*]_Ηk4#d5swV+Ei6Wן#emBxޢeoOiKUAb*^qj}Nw1L.>о .I@ RΕ2rv6QLʖٛWQI%oK _!'!^~ofR×$=Az\Uwe߻Y4)(RNSWGK9*r8ih$) x8 oƱv^{$X{4ğ'aNn+?T/[,@v#IbXj Fbݱ8zblّ>Y겡z57aJQӛQ_AGZ({ȑ@R: HQ;v#e+{P;#iD({*U\ bP^Vk1 {CM:2-$ qf^C0Lo;Î;4(`nUƗxgtT 7Da"h!̀e4o2e5:^Fc E.q{S8o=0W4r 3=$I*S/-fxmXw1rd%m%eF+ R;i_&2gh3s%n|9久l-o{3JWkP4ͷ@)'-IWP:*KZhԿTO,-+cסh4|Q:^HfH!m#CIm%mc.0ÔjC4ӊw;r|H2 }({OV6:0 Šo;,nfF0 - KN̅_ɭ9IJcd5EF!3aJgTi$ #X <_3Rw7ñV oVp/7x^龅f9)؞rr`3h/AO7@^#C*&jOuɡQ{jE6(`VycnUSxp$b"1o=0Wcx"6-ofp$ٻ6n%WX|= uR6>v `"&rSCRҐ"Eki,K h4nFъ"畫ٟk~ǀ%n.NŮ rDspJci-l:.?bdhܥNّjo"%.f@J0iH@U >}m6xٵ*Z@KlMD[kMҔ ġ$(ī"#7Ɏ:!yG ⑐S|8 !sFI8a֙Ds!)NDHvy;J\ |.FY-xMmhN8ۊfPgAljHD"oziWux1u^Dtm_v 5SRAM9 bSRA;M!祙F1ǬGP~H.S ?ލYn?n)-Ej2B L4mN_[ VeƝbv3M׿{:R0ҍ hQHC'.K7moyF%ebow]%mWja=VHhǠ^yyCgt./]qZ2&Y(S*0=uJ ځ]Fe E=3q;3624rg'7Ҕb!8U6#ƤZ ƭIMFP`+u`/WMw46̋3`R5 mD0ZR+BT&}h@#>%{ MŧpzZ-Y(}>m | ~-yW诫t:^tWl=tEm0٢hݦ;H+׏!Z]zuxkҵ>-ꕢGbwHz&6o_/&D+ X6;&R78.KK?`Z3gXEEքf"f7WbN ]$z1-JpjdPѾ)w`.CF$!/,]O +zaL|6 suvY) ӰZt>dd-I0p/2Ɛa_܍ 9H;yp K"X CLajS`8/\V.CNn 4%4j:rjӫ˪a+˪Aw]V HhT( ܎hIOPHnjiC8 vq'6N|LDb1gS;zAn'"X5]1a T6EHB \tS8/ҍӸ]+ѥari˶ eC?#Զෆ6U5TJ7A|5yv'"XI(Farј5q,V t|  @04_+ *mgSֶNo Fw}cm-eٱi&qV6\ziHlm>8GFuqg.NTImB`JxZH:+eTS'32AȱRn:Žޏ~RKBe!~߀Q(ha./n`L;fk`*x`2).ϭ=qq.nx[X-{ ?; pE|j]"ۈKTDmj,S#9B=g[ C U"3{G`" 2k J,@:[wb!<^PȭZx}MH">>@+[拘3۞\&0 ??ݻS/xRSg_u/7NJD!PB{hضwhƴ/.*hy{}KRBF* Uɬ$T6H#4 pLcL::DRK\7h̸p)Z(.Kڒq`Sk5DfQf$L!Rkib {hrl&\e0o6ͦ.߹ʃa/ui(E@R&ڼ?/{[_>Y=amHpDehʽ`vSZU(VeHPdiA8ZjKTD[l#f 3Avt ^JZPΉ 1RS^RDx}N<44G4Xe v Nvm HE(V.V^E J],z,jA/B>" FY8%۴1.E%9W: rX_C zIbTe7*_mUP&2+-3ݶ ^OrGs>)ZrPl(ޛb6Y&rŧ^Rƶ8< ۿ|Lo@8(!hF#Jҧ4~MC !¢3c(ǯQE*]1'QK[=p.V/Ogޚ?Q-V#EIu9N a\5<=]yQaIt9\7^9xBJ'#PjO~:ȾG_QV+MkV8c8;oD)E8lU獒7HwEZ IԢ_q93e]6߱}z77Dܢ;?3t]OWϗĥD;g=~q:8*UdX$UȠ¡wb4q%w;wϾLy^K56yd;+x\Rs(aUsu^"vfءjOwQuU]h~ٻ}vٛ,<\>p`/WW/' G*<}RMG\;$ :Vh6 PK8B:˸J_ck ~7A.mgTcΛPD,Ty{jqՃ(OWeH˪=*^IcLB !Hو;6ObVLy(.4Bv>r0uoD]*ڹgQ͂_%x] a0!::yk;;Y{q+n'ZdOحhZ}8Kz IP]BǢSz8D}ZP:(|dv #N(Rvhڜio~U?)t^>y?|6P)Aq jaOZfz^yIp[fRINjdE"d`Wp|X<0b#s~Q%2hw^ E#5@|VdiG[yƵU:1b_7!#pؘdf+l~Z}VY,gF I0RSKPk%AK$c#'^)H=z`kU|z"S<2KJ ۅ'Ev;"S2]frOʻ*KYǁ:~%H!N*vCn>?Mo<'@$'m&}|ҡp0_=_g&}0wόW}FPpl`Hjt;LB(%W3sw_Ӷ*$C Իfj,fOzǛ#^l/$Xػ,W vvbA,67Eٖ-eIݎ3n5[עXE-&@nK:unu Ww?n&Kk>yuTrd1)V+Eۢ(3(a qkX$#jg44+uRJiww[RJBar{S@uUS!?rZu*ڴWͫ,[S9SVٛ_ovV.E?gw[_+Kc۫כveT҃UA1VznoE+p$X*ie/r5k>Xv9m417➿ӿmʲJlmnfO.sɗ*WE8fȅO&sJVjTslj7Eivڭ)>Suv 1C.|<~!C>OS)=˄9r=kuZ(WFeQܚhԣ"籼!4">*#$3VbjuKB. 
,B>JF&t%`9m2UiM1Xr"KAg.^(z戦嘺 *$5ưF hT\n,'2Ig5 7O("XcuJR*`W}>TUwL0d,WDHQZT]Lʰ= & 'ayN7%y„I*H( rX,8!T3p)23X.+sA@!bWyjʲ,(W,Z[N|οM|5YL&,6XGmj \Vc&y+At#<;̅c}Y)Vo=l%}=!ߠH,g5/{݅H*u{^27 blè3IlF5[m2bqDX #oiN}t34xl d@%mz&5Q! lڷbpm-Ѡ4JA\l. CsɋjcB+b'3b&)a \TMT.7=ixSbDkr{v,0Ԟh/yV۷ ¥NľnVݖ|<-lvNjax4%蔹ͳ(4ZӧؖP_V`Nя}`L8;Te3*ٍ9)% _˦ 7e? 5 p/$qw {^h)ߺ+^App%БﵖN[p~1OjeXSI[ ;^kyޗX6uW$'h|<[.wT˴g ?ТZA&)UYze].l $TĥA;YNP7pCB"E=;<˔k9!Oa]] ޯ|\|wAr ;<3V"b{cq*Ք g %[~훓;o #X#@:w-I}eT{cDg`3Ğ~:{QgTáU7穳vT9 BY{L%g)JW1` Q*2_ $XZʤ %7ZYgu@VfH;gRkhGb`519FłýI-nm[x)}TGc"7lP[Cr*:is'z|v(ۘU`z/S؅} JI C ?ը:{J?׃<,b;10&oԃ߶ ֹyíE!:ƒn9H@Qs\_nfAF#\C&D(br%Sa^|4V3M z”uYi.VIH8m)=}9ђ:C2؀f8<55 BGM ]S*">DHkq , Ѓ,pvZ<(*'r U`&*LaOd'dHGcY]8EǡTA4Ov$z>Dȁ!Ņ"iR}v3ڡ xEg_Wt%wGeRR^o'w"@EM Z6Za,s#'w }ZQύk@Bc10Qҵ{9o)Er9:Z+h-ek̿5`V9g)픹Ȕ)ƄS)!WRɚnܺS(4&!-ջzԃPK) =y)/&Z6G8##Dry.@3lxp-W5=Qd}v9,B(ҰY0Tsí50bb/9%CU= ?f>KԒ>Di0$\,{9GC\tqE3K$tz?Fɋ h=k3i*(>K#o|pş T>ه>EZ@YjZ ["L&m E(x>`KEggj5(ĠRb5-'vݩ3mm_{g#Dʥ9;ɓjxu[lWPWfnz ʲo6mbWhӨ{>]h(r{-*OK7Z U ED3"[xwJUFf u /@=2NEœD14~NE*PE®k@Zt A:WrtQ־Rk8xPv!":7诮88VsVL hei9vaGv5|IQhȑ~bx=c~I[Vi|hIuet[zע~WA1҃lצ:Վ:r\.\ę+5}h3䥜VH;)Tfk>CKvޚ P.JPrbJb^%C0L1$֟ &B%C<0gauizmFNpnubhl@m!kksIBGAD]4tŝTmXUzZL } C'3R2:?GH)ݕ,Y1NHfCP>Ȁ;/媄-j-BCݳ'VXQӡ#Jby&kQAŹIWQ6O73慹,wqj&m:AȬIڳH%y;_&P҉$>k"(>ʼqeMFS檁(YD Z sUE Q%aXﮏ+HNr$aN"IIXiDZ,HA{TV[sz )*%gׇ/.IEUx0T ܙ[C^j!EW@|ս#f=T!TZf*\2P蟣3%oqyᗿo?Qff&Orx6OoFo_]]5y D< Ϻbf^ߛd^?n׳xZT9!"L Ť ZN2dZ2'D9ͱI%(4mj&|jM Fn.7T!ajcȋƂ8ݻ@}ZTvwm a=,f]kuRZ 65!rjm-w?;.gyc<˔tms)k|tB?{Ʊ ͞3R_ !s`O6F) w%R!)[M2x30bY|_UwUuW=4}9;~M˙'تzM(#G]B+q0l%.E'YX8D8*lrbf7=|U"h3N*FzS%V/c{T*O!u=*Ѳm0;RT"dSRGαB=j2#y!iE̐q/ikOlGK!;Ay]fL[B|F&" ӒBZAA' 10)XjDJvPg$zdzcdzvins6> jz4g<דϠ=|&\y ?3?sv 9zS"m,_7l` B(b#;6@ Z`IAs*)iSB.LFK!e[/1ب>L頥3)5AKrZ7FkV\9STԬX c^zXI׬arj"I":6Ty H` E*% AhjPVH WMMűm͔zC&1DPaEa2Up#I pV0pybD7Zl8ft|qD}CtW2(Onq@^ǖUg jÇ+>P߱=}/]08>epqwW}F/Ժ7 H_{i0Lxs3~㧴f;c0Lg!^|%ۓK^ݻ[^< ɒ8{_TO8)yQ]kWGbnǼ4qù>9gX /@ #a8)(šFtio\cr(2:g@J*0y>F6'_S֢ӼҳC#Uu74,:lPj%ݥVhn=2NbcsM6!H)Z57P"q<IY-ԡO89+msS )5uo?OHJ)`Eǹ6d|{ 7]~cㅽ0dN\| x%v+<6pUhAzqx/c I=XI@W@AG=)z͸ if2Ycz?rwm2~G+H;hCu3Z#h EuP K LM(hvUr5N\nIM _䩧y)i>Q$NpX#]0lV1șTJQ+JE  _ T2+n*iI%**;=t9|c*)xEqI}A3]RB,"ppxe{C`6c4<^ct 31>G|ytN]=ug}Z cxP\^6~jW쎴"d)"=kkz géY*$sV;5@!*%Ɨ.5>k J_U]IU+Y{Ph촧9y7 -.+?fb|j?t_D73c̲KH.U*L)k ;]=(i۬՞ 2ڃ䳽WMLQZ[{q\qSF1DeG^PJQJ ZU&aСVXc9"g=F鶽^kT(3lװ治c> 1- }r|sP%sxtA+T׀k f>3T?3Tx]9쩛_B1Cz83y\w^}Lz;Lkz|4==xIm!@r軇+WbU^QC8p`-d\V|c5d2`:ե{gu {m9cu"d-+.;+d7BPe-Ek`w/;[Je{HF֬ݫޒNd6%Nq]<8cmἱVu R̶ i0k%)[N1Qv&DXPc}~ũ }{rkcrI꒻$(Si>E ι 65iPsl;xm@P 4TғڹXZ1Nk`4c|5KWd|Kv(W*:+Vz5]=J'Z,zx=9aZUT;uilK{7ow2ւ\drCnl vR 9 Kӕ9$PԌ^w4c ~ޠnYH仿xk<$(yMq@,TY4q4䐅xovYWe pX}ݦdiD?ˢP'k"D)i_i# pkHa PK. _q؀8^Ȫ9ƯrҬ4>y$GmKAy+D0AOm&(}2 @/>Vzi<\^+^_ \c@lAt!vR="̜&tko/OQx\p{L'sۅr{^0\%qWMU#щM%{cN%?[Z9fh䛇j]U`2=A;Y!vfI5Z~Έ\|?%tfV炚'@3W<<~B}rޜ% UNIyM %btkAi:bȩ1jo[ѭ UNrS %y?-uJS[) I錨p[j`!W֣ǃ%xt_v[6iu&kiTB h(׊j<#X󌧮 vU5kwR:I5H=Ғ폡zUJK %tWԫƫk̅hǝ<`4\_ɺ6uJ#Ur@ZiI!pn ޠNIR#RB39{3t+6.Ū 5OL9Tx)os,Єep^]GVL 2 ,S'\ԭjNp![iru⏥M֣1 @ȫڏf2۔7wzs6 {yZ¤Rg/ ?R1oPU.:ڛW߸x?)ojl?o*Jn.YjRD4~5ȝL&/ZGni(Щa8? N*nB]p0F3][s6+=ۮ)P3~&n'Sv:M -'/@RE)^DobKH|.TjMJZy8!_Lac3&Qn[?L?>`SەzMˈS 3]v7;#Eѩ1<;?ܱ }BRpGbȰ6vA_BhsG"_na.`j t WJTP[Ð*BUawZSrAO]( bJ]Pc>p,1n Rt/t\t^F#PW2Pjwt&l>Ux>o;1T@wArޣ;[~P2d:HO * y}x #4RL]`-mJ'}<%|7("VKBucrX/~w.y@|!i?Z=B^_(kfY!b-+8Dm(7L? 
՜3憍 b>XV0"}].lwwmD-tѽ*Ŗ6 Ƒ89ln FrYY.h5םP83D:_"8uYɯWޕzYٯEa}fxTW"edCShԩ+m!8>hwנ5΢\ZxIQEfs&롙,%4T即emb+5_f2.8kigNuG*j3)^q"%*9%9%ڍ/NQo_,wnkNtn+,\;|(wJ7Nv1 51ZG&*L\| jP ئts'([,8[dhz%q*ݴ PO!vQJݪH[i8iL{dRQY0-A1bvMxu&u5 0.J7QQp C|jEaBP+8־D#[H Ҙo4n^n;~qhPp[(zڳ*HԊ1i(C(Xԉի$HJS#UGx#h 0Dܐt R ]RG!]#ia,4Yo]8(d֭U3Э&d: $l6/(ҡvJ׮TKy3&aPw?ݻp=1vzs8_Bl!^5S @b N$7/g 9ve'rmO VĦ#8ıVsXZ,y>9DB $iyؙc)aICFL'OJEsfOQ=m|'wJ'ŐhpLk˓M`]!}H_-vwo`{շtTppR Am=㤅 TST' /֥_s/։3Q̞D%s/VilɁ;͜ xa]3do-bư:{i̓VX9i9՜ҙLCd;zwE`'aTx -K G,D1|\KH.ׁL$v8\c4I(qvAUZ&_"ITw/(ڻE O "LPP1-qZ81B3>\lׁ,ֳTԽvL;2Bz<Čޕ΂VHKɸ\%djLU]!ˉ(bmx!G/LrIm{&%yF[Yńؗٵx! :.)Y̞#JGmuY cC,9-;O 7Y.WrђI 6lW$`"4a?fލfl2+9r%=ͻNpZȄJ@[<^75AF:lL^6v*:i0s+LNc6ZIkaBpԒ=zSl֥xvil m1Nއu=₆/9R]l{AT^$|x* Y9+/yF!6ow÷#Vo:E畴=FF>I}j9g2GGqG8II=Qy%`vΧ\]b܉AA]yb{#5@-?QRojHKV]h0~;> CAFc [9^tWOl4.gaL0<;|#+L~$,׉8}eت@dJ2s2]qΧSyqg L,@SN_S52oSݢn.^ibD-a8k8ffl32rB{U왲LΕwwgӇ@ڱk_?>ߝ1~P,=O{gLn)蓺מ~?77^= Nfӹ>QoH_^z|}nnϻg|;S( zOy"|o@`.DG|_gnF/Q:E/,泻'CՎȷH0>~8_ZHdQSj\kN4aINwX{_ ~Aw0-KvoݻdJeOdWAɇ*~ fjڡ;)]ueY_*AZAA±}>OKQ*Z˷kuQ 2^]jK}__AM/U15G{l>H z UaRg.bCgs?}5SCֽ^w6'japu?L ],n@]Ε<"dz ,tgSZi@EL͸Go!0޼~Fpqq%"D]m / 1c]5cH-icʰcj{㸑_K 4 ;Cup 6mYi5#9M=^m$]|X,`sNԬ(ͼ=cuDo/nVFF{e'(ɨWsCFj04;ߧ8uz}vުZ0f1S|L)' Tk9T#=0Jt>W2F *-z8N/6;R;ȵg5K3OU\o-e3Z; ב\JfS&7sSDL`Ȃ.GĤviZ)4TjhP 9CMo˱b24ENfTf9s83qqE +zS|wh, 5 r zu>Aye$5,AmfRhT SR 1QSUe#9\-rHfSJa 'F Qk^ b}^—Ilh>YBeD;G!>x3̙\P8$l-D=|3J5iɻL]03BZ->kʞuSHf)~g={;6k(HIy C lV^rGJp٦Yn@hgv!6>ƧDМX3C47tއ'8(ۊ |ZoS*kW9 g]t*^0kR}d9p]u'Z\S9vIs ](NVM.3R|*T05F6i0} P0ja:oÀ "˪πKb1VMۘo{*tDDấ)Hp=:q]kR\K+u{κVӝX楿zv]FZI>|CQÞwt٘`LvopG}hpwi^2?;JyGaM';aw7E"8"\z(ܪȕt0T:; ji]uLReN)W'L>qwY(Õzx(w^%BR O_ϟnbԔT.Ϳd/[l眠7N9EϹӭ[N8ǜEpsDk9MnzZג͜N18|ۖ j%X )P(s- W#ϜΖ!wKgVZll 0goߘ)`ǀCFQEQa^TW5b-A^C M6|DF1YX TB8d92yS*R;ZeMB2oi @7NS6lou[: .!k"~:dMU}! $si.g2M3PL4\ i(9O<U ƲJFsU6ӥ+D8r@ɸBk Zsp䏛  8Uy\nhU<%㮘}4M+\t臋߽`1/=dinWme!¹Kwb[WBS{J$yQ5^E$NNE+bJqU|Zd 2fҼu`kI9y'e[Umֻ!$ZSg<<_s]NM(& b_ݛJy6d\pr쌂wm~|V9?P!h5hH\Eqe]ۋ^MKfoZX43EfVzcVUʤNfvgf ksΔ=,;. q,I\,̗Fv=fEgH0(2=D3?P-囇䳌"f;zL&-NӢz})tzljboagf/\Tk鄊/UvOB[3Whdw"leLJEcI񘠸f1-xWn_8k:(,YkqKbwT}&䊱cwxXW 4hNXP<(A8];u@-F/ޝ>oIF[ֵ`Wh`'#>%.3Y'%d^Y eeBȮ8_*&8"yuE0-FvG =Brv+7칼h`vg -2;g5 pSHvlzX.u'h?O>G?I lm^k.!N1׌\e㹳p= TшJRC!xecL2dԌx z28䷫7xlܐأMo}1AЇ!u<$n=K#eB#U!cij>;ƙEZ>/CŠXB]CpuPő!DM, DYC@,sDzJεʵS2|GTqa<%T}ћAtO'|.Y-6TLcpz /2 FY嗟!2Kqe)ZalH%ZՠS:v+@`-y!69>Cz=}!׼GX]J'-2v>9mL$Ttˏ^>xO~=ʄ@Rt/?Bq2]O7 E# ?Bsk'ӿ/L/`xwɆ(FBAۿxHO0ۭmjWo9lxCB9v :{ؼ|o2lD EŮj ܰ/(hz[a#"ňq1^^׭֎O2ۧiXD<"ИjoP=> =\q*h-/"ړ\}i I/,{|@#QhT e|X\nF`QLCϸǿTC@HczѝaD~0!yE>Ga݅G"P>XCxᅩwF4"j?>x7'l7 #}sv8q<ޔcۧO3;P`W{OV`+Beӌ;{[|:abh˚њB\tPhےe 0}4A njyR>/wy;V áL3z~V$829ĵgI\)=< +ϭi∕'2iFR s22>6:])Y\qpιr}CVЛeo&/㓍ʽ;sTJY؞TSTR*LrhdA[k c JɪM !7 >89N]Kk,N\eT/KN۔h" cNgy΋2笣'{ޱtBy΋ ͌@n~$=&Cs>1xA<ԌfoA Q<9Y2nJ\lk cZ_F4`h&9$(9/\2ZY&tTQ]Z{ HEΥ7*s9%k^z+.F&Ĕs\d}VUDfۧvrcpkI;0򲱊\ J39ꓘ~~,J5h"2&\,1DS'K9%{ޱ$j&/$okږoqEǦR4-m: /'+jzl5y4Wz)dO \;Gn~hNF/ ]u9[:SKϩ05q<F\יHpډF51/LH ˾0(tF݋" %ВWc=F1Aǡ>y; ?+ʜAHJ#]Wf)7>]l kRƢUӵZ͈XIXJ^#YHA'*p 4ȍ#ɂ&R}yќw=T1{0R:4ȉ(_Koºuo=!b$^c绫s\7$=/Awu"'K0Fjܴ1:~M/bEp|1X!,8$R\;ʆո a6C)4cyEFh%8UYڌ^Cge v?k g| v (+įmM]MGXiҕ)Iِ,TtaUyP|Qy&dsp)ͨ ]ٻ6r$r76,ȇE[,v" lG;d{2߯Z%uKlCXA6Z⯊bO1*ͩR!p]JN6XGגC1dz6 &>% 込Wyr *161AةtУJ3mT!Q)' DPY!̈yS΂dt+h (u rE.cY)qQpXOA OhR* qy'U*897dN8$=\ E[h|֨u-uZUӕD4@y>_WhN;{5ue2q Ʉ8]A@XJM_3^x )}LrT*v_&l&Q,>D(…36ͨ"+WE4&OuD? `IF(@y, ΃*q%p t19Ww4G!g@ T шүDWa>/? 
?sժ^(C;45>Ao ba1EȕP2ysq!I7-*^ƨNp1"F)A7g F#'RTBʝ-'q14YH"Z WF߿,oZz4.t2 @k (:" 5$ qCT!J/Fy)1|EzRqTGA*"l'Vf4x)cRd5Pc+I^\^yʽ2r=C< kNjb|bLTTT8BQE:n+jX}# \Y0^]oWr/l-UH%^[/[2yj#\ӽsm}Fm*ٹ1dۘhbR֣j^mCO *NOrZ$ҊsﻢMД;2 ?}qnNOV."o&تB5`~Z9|˃(F}?&zqmyϤVVPpzd~䗰ϊ!blBi.k_E&M'ף7.G*9.,nM̯޵$die/a:ξ rϥݓ^YYJA$8SE+Ԏ fa!_M;J=/9>X9a}=w{#UNu`GHR%7 ph{*. -W }(Tpv~gq3P9fZsy&_..MsYMKR7ZTLs;1Hht>bwJKd_fɿA6 :嚷w 0/9j<6y_*XQDLeH½I~+RQԮ3~J?;;[ W @TG螜u W+՝)מL[P7#_}WvT w-?w6And z}̹mzro}2. MƴhYNVbªTOc%ڔ쾪";E5eCOfvmg%D9Bf_h3+d9˱q\!chX80ԦaqJsAȬ#@d~'Be$18~ڔQXzt#d"$UeDւ+ fT YxqS+QubnS{K?pn+.Ezo5;5u?$4_?n ƚZ~V'jeܿa;H < p$2w9#I;4x]nnaA։)KT,?F<7r v yvtmLT/f=mb!Rw(~ב|ZWeLlo׿ۮ(V+g|g(dI~գc]UǣRC^Au׮qa$"ï{ EŠ8E{8]4Temܩde"=dsN}Ѿ7@aӚVQFઈ_w])߷G 1ϼ}~Ů&W8'R\}tBK@)08RNu&~v̓QOT'z>eR*3xiSdߙ]˽i=#8sϳ.W?s bhSbU #ދZL85>y7Tt G$lYWL*:{*AJzkYc= zJ+5ktZ;vҡ0FA1?-9!8O-HH-r)-eC_K$@~q)i6׺# 5KN;,>6:o)S&W4%oSFw;TXu%"y^NNgLe6,ktԬɕ$ ly kE<~{%w8Rg<2X9:03őٖq8:(ӃuEwT]U+\wJsF~{'DܑIbk5']C8BO6'Xڟu窛S~ opF40!:޶;E=PpwP_Cz9a@ +N WC;`k3C^ ugvVeby3\r~$k)ͤ\Z R:C@6j`]8np?MH7װq $oFM[6>e7=-z5~M|xK@+hKwpnOn۷wgo /~17Aso$ŝ'EZ[[՛"$6Z}ZY>rxw,:Odzy<>8BQNX)P8KxyO˸i;N/y.`b{ݕ7YJJ;s47]Ǭʍ[hߚ 亻KQ. T!Mg.׈Bu\cJ:u%~Xr; 8?~kP !Qy=gsvWU#K_9pxұY{xtAeo?M}4!\ jJVB.og~cE$Yn0iZȱym7ξyDlCrz_ƙfL[`S f^:[f^g+?6}\shKx |X˧>82#,C>it,цwXwf-88% == ΆZ 2[jWN6Vc>G&uFOk@c-FVZ ( ێ UO͒Ksհu/&>NRњF/B~33qeX5I ]'G03*مLqD3'4^?7`.UD?UU⌗9m7yDl1?zet0e2Rhp{d(2,fG6X`bPGb*LSeӼXz<.VAMUBhM,  X9'Yd@.eL+j8r%&TK^A5 X"c2 ˀ ⋰؈Pk@aB̈KJr4g8$H F3Z LxM9yrO/K=JܝAt2L |moxym:ea41=jyߟI:WVKYxT>UeL zZ/uf"oΞxA~pirչq+aE_+}0.U5^P]3 [lmk\$dP n68 ! :/5μthu7Jۿ-r6',yW/PU: B&ە0e݅XN*wqّA8fl $m3"\dj#wh{PY4Wbo+Y))]?o}m-LzW_b!XRR G`A`n1JR1exD)뗜(D7X9.\o)։ p@U\CG]Б$&_ EkFk{1omZ>Iite_|Wlzj󑗂]_̅3غuˋϓAק_1kN"fsZ+*{qӷey*[ײ|U^kx lfK:@Fs*22`̇F"XX)<-]UĂQZMǔS3U++LbYR)^<]lAWV-V' =sPDÚN֤k$ݰ [byufcwReII"`nИlwtuopDI(F̩ b" 4W(9?fFvAN<1qӳ ~wvRsj;4B^T8lX ƒ;8)e_{YWw>.]KIqؖJ y)|З@|Rr~2`FjCRR\¢d%A%4oם2 1]3'^F_'ԡ^b LHRsJYԞX2\3Nv[omo5eMؒ :^+Dv3o4fh'0BDVYi=Wrjfgߢ̘ hG~Sꔁwo8S,k$/>%X7I|̱`jJL=FlэVo;q#k~h7}[li迶;Hۉu t+K_/|Y, aUSz8+!ݻ~ALlxrh[0CK n㿳y3GT8؁tB 100EŒ cM`vPUevb=v`n!Sŕͯ&w, { 7,rэt $n'&yl}ho?^KEäO6 lcu#?‘7ɣ)+d'a W{󷴓DQ|˷g?^?hp!pKNGtqQ,Qv mW4Crg`y ?蟌k7?y?+hRa+-E &LS-0.[ÿ!kq+1oq5Q ײ%pZe2jnߟ@i5;I{?#vB_∴BWyL vtn˷[{}`6o~vWnΠs<5Yw;>X8w9u%Tӻwu2|L?nsr|>;q#Y7(gWD;?t [jbf-h=s9y?{ vlܸIG9,M)"Meg0E̵5>rd;xtMĦMzt5k1x"ήd<1ᅢ>2v#eW?w?g}Yrm'6WDѥ :0R)hfN VHUuo߽ dvK:?<5;_a4lg+u;c6 ނ[IW5|Tv5ٰ̎ F]kZ ^v{; ;hVͰ=ׅNߔuO6sڲ;֍?|/0v2'3g 7/IGDO{@M8i/M}&ffϗ>u>-1͖ZX9%!B<*È ccxXpQEhiB}o(Ē0FPH=.TԳ h9UK2éQb]moH+B,pṘ${X vnv 7&qlegn%Y$R"ER`8SUOd1$A&yDt GCmmw=z 2Y\$1E Z+8Pkqɒ}8(PG)T f?`&sKyi&n X"l\:>}̅@s!\\T19 /6E\4vqn`H@r c1!2ynL(G\ˮz7n3QLM(M{[f\Oe TzbʺeR3@YZOl&%`:b3tCkc)X傒_83R*~ Xy./6Nv||忝t:| M@yƘ!ehR&Xבd5frkέ&nsnTeGk5uZcR8b>jGA;x@E;ڣDSLJt|ww),5U=ɮoXeĻſMps-G_?]<_Sʛ,Wq=yAj{ a5:bZrV(t"isP\O?vG1& mEo]vrz>^ЫeS.5l'vٲ0myyy >>{zL )ijГƨx}ζ=8.Rxi6ߜy9$Oժ$dz>`\'X lx)o7 EY/\p!,I 0n Ԝ 2|*_Yv>S ʂh >~}Zv% 𴌆e4 c䊁R(/I.RCT4 B"j`$:U#ZS_pF)zE#ǩ֐Ǹ)HS/Dާ If(+΄ *R ex(8@<8k\ȇd% zVpc AhRj  abk+Ttgh:Xʋ= 0/Lps b J eS#H8EhZTLy$l (x#7!2Bj{R򬠉)GLRBF-e 9A(X&`SHi6mZLa ̷rF&nɠ|)9q6}Ere}o)gAA3Wy!BT`d^Hʄ8\@Sb!G+5r+Z|0~}-d!͹*3 ZTGv!,EKv"X3C&qGf5 P*qK SVqp7,&8ChEgx+oj2bdw=nN/2LM^hĴYWT*g-ƳƱƘ2xAA=,Tm,Y /c,( E)PLLNyqwJv0ɤ'00dTv 9PVQg2$$|{Jb#Oۿ|]zMfD=܂F3ʢYU/̯@Q5W^ :@CŒw!&9 sge}%vA;}7!?LB=뫰8cr4oH*u%)B?tjUKWłe+CxO3M{Â4Z3%XZyd{?^wT\fpnNjD2ȑ*]3FJl~jP\N {i}@書/9E#-<e?KЮp@`eGK?áR?yvrꝮTYUߝA7)j~c̰=ѽg6 }XF俙?vǿdU]4Eo<ܻC0XR+wh2nC.ʊV*+B$h@ 3;kCYA4[3*Z}\үFR;AʳjU^6ΘBf̸s VXCޞx໳t݄4pʕk-$ftn2Yu>.΄`92qiBw~. 
SM[[3( =<2ՍsPlCj0Lh]jk~V[I dFa-2~}e 92}ؖ ed}Z1J+;b vJJWH'e(yfetޔ:3E,y)%t$'APJA̟+t7JzRVZmt%| YQJ;bo笙X͸ʨ&+a@.KԈZhj˸*<ܩ fZ_k#U@Pcfo  7\}+2lqS#J(G P'$n""->Y;?y; 5R@=x7pƉz<ŒE cksb}ZŬB5x -CN७Rw ]B!ˤ^RYˡ|ƿjxf[Ʒ\~6/'FR&Ņ R K v4]^jb,f,]^j39/s-Mr 0m(^6os'$F[U+f;J/*M>urnr{y,IhY隳o޼^_G{N]=Di/v,@ْ6'n2PPʢ(&I)+\J;h +8YDBH2m*[g`r#  iȪaPі /Aͦp mT!*q8@a-G^he7^`P8BÖECČ)[na3̿LϩP"p\ .Cje)x;5;޲a[!}=i?M74`)bTt(ݺ7RV94h0DPA7YLP4 EZtnFh1b tʚk;~%삽,vz!5ޣ_is:j׻w"Uk05X Q9]Ìr5a:oOiR|p5 cUtkaoOj371ȚIgpn:jPўroܛQ[0cz)4'\\}k$h]ܘs!FOm4*)҈eX=lO6 ?;@g˅9r=& `},\THtG㹵Zp:/EZ@(,;5oŷϼ'.cJ}[/03`@H>'fCnb۷+(gBjRX-Z;x[;fؙf #$3L(Zuv\rb=VXB2 ϭ}܊Vm˨ j'+gg>{;64c4͕uy{K3aXj'.ABD#xR+KrE  rP SR:R̎7ko벓`rhqP%NsO$NJ>+m~a/' D}}iSrOyܒ+s_OL Gerr{W)fۇ+ ~ב"K 7S?_x޵#;D}$&$3fP$nǖ )C6eJxͮݴTUX/Vy"s]_?y^HKhc@+Y=x|6e#We4W IQE8l^b<А|"%S0}5Y׵BdhSEb$:ݎ8Һv}`ր|"!S2ДZ-Ǘ2;iQ4G+ܩ 1_kKaLpaY7cv[uc٭Ջ8fY|Ὣ#~;/~8 _ %N/bA5dKYv;nN/ީoOs0)xR&)$0!dM R:5}KjeBk[#2(h 0y߼@F 4ˋD;Ϸ߼uNj,} dq' YwL- A~9pJc ?[o(r |ƺk֖|]pϦ!n DaK8 1C蠖3+CY8YAJeÀ5ZYО͎ 1+곥]8AS0JDkNJ&:DB&߈ AM-JsS9SAlNf6IA(@F=4 Č]P, w/ K^rEݚ(a,3H:̮ |X;OdDȍqv\`*7NprQN%"(Aj6\N4G>AJ[ vljVT AP҄pfKW-/[`ur*E(qv$5 k,E2"к]@(@r9y({S5Jؽ fFQ -T ,B"E*Y @$٘ ,ۃ Y^"ECkUn (ĤA{ZV 6 )jӷ׾ɒֶů{0@)ԋs{B*!8a4;?k?i |"^TD!Yi'uGqb'Oh8g3q]%9S#r6T"!(A݇SA&b6h CI{W/-[1riGbge{\n}R8!~6FeM86Y7@T6Z)wOi#KIzUH,Ҥ'SA8F V*P(C0;5yĚ| M?~(S^: ~ N,gL o+gvpnRe8uBƜuO`ħY{#P+qnla8:e0CX0SL13- gLT#0r@k%0#::}mө]5Xmq$en*t+Ff\qK@cPs>Q&lXa }dH9MF CIoie1ثj@?՗jo{$rrN{_b[:ǭp}lvl/{p! |kdﱓɳaɽ?O|gc?nF~\S>ίڷaKǓÍ~X󇐺Jc0̹JifA NSJ}40fRJld'5xIw|6M:Y|73{-VL?^ %`wT8^ܙ}-' 4_.,ya<^8:lpݻ߽z~9,%+5oKarK/?qc}P{U",ng0 {t H NW }]_r>w+s9k趿דcKHzl!o|ʨF ?)ʷFۀ߶߶gEϻy:V[wݻUk=Q}@\.hijYLŚxt"$8qsmwWZ-b4_#ѓvi(O̗mu~VSg91p$' <Lo27AHoEn8lz$^#lN^Tn <;Ba'sZPiaS}KIww?@SfpV ?Y5y1[SAq;sΦsw9abz4Z'a Epf O ngne J6 N+}uN.>0GixK۰+yeiB...Z6B'V5 խGWKB:(r~~ uł}:,$c7dG8od=_Z__cX}qt:"[DCrTOMpçn $D`\̗{<;}6:K\s (Xgy3w B,O^ hfTHB(AB[% 𓐠j2@1L4M j pHpTZ ;B-դGpP~1qzn pNy,2 Pdg 2Hf)Jۻ3( G 68Lw#- I73WZBR";Qѩ05ؤIB)dHB2m(tA !'*0c ')z)A"ohq5P}R 3]rRD?lfD qu롻#%o&>s3R /&͋u{&ˮ~r{PNm'I7._ zpVFW-BaӚU7߯0GݥQiιҨ'&l$#Ja!;4qtD~#[H=LnV,bQK+ߥ=0`k^EuIںε:{9|WMt֔p47u6[JXsKf .mf EФ3Bh&Zy\bH7-\*R ,`)tRB3 !I #K @B)"%ʺ%zSYU k_7(a&l2+)7ZPb D 4֤d&LҦ K"a+ULCA靽T.C`f2M!O*"D2Y @$JC ˲U8WCāc~hTûZԼmlIMbY~?s_ {S._.{΋o~^L~:͇lvP#g;}1uK 60N"Tv8 ә;Ts knLO bI__7WT .~eۗ7sfoYFGz<5΅ɽkJx-d{/b1?Хi񕳜yo StؘCQ 5Jˢ8N泉/-L-MQ3 +2AΥ ֕ŏ\Tf5#!߹FTƆvb#jX BD'uLSRTڭ EtMHL |u@,=k:?/2X rيktH{1y9Xoޜq֩i*)!HJ3#y_eET;4 "d*ѳi>  ɶ/ VC.P^|`s$_πR(ۮs8⏋0fQJG (f&@w:o5F0}~#8](x-@i XΣ3@Iwg$ׂg$՝ozhD8GɎ14ۯ,휑[hh* 1}N5%UrghBBs)){nԫ-щGvPF$~HvkBBs).kkn}h~3y$9gΜ'dJ+Nә m)qX"XX XENܧMr TUuګ;[LQg:Ngy)80Mwcȝa\AbrĐV T"rB7A"M^^b^^|y~F 眥[G_M@b_[%]ecj2 ?Ĝ\ 0UI3Q9Ĕn0#LeV*[٫\1H.c^y8?A k'[C xsK ӿ3s &8a]2l jڽ BK7c\l[N҅0 pgsss;1qgVGxkO'iOOcEl- !;JTlɕk-(Z[E]rf^:K4| |bS,hZ2vuَ!$5UZu"p)ܮwϣk׉\#}ءz}q5ł mIƣՈ"btj/dGS Y]?zFWw`ET4$CliHC08./?ߔƍD 1*乥A}4L'$F}4H.y % 죞oNh4#A 05 R(q8n+R&F_&,9cA։y.Yf~v[LX`"w.Wo d"4w컋0e@Ȣ!:5ٙ]ȴ^|.,4]diG`JJafJʶ0 ( ՏVA>7,4]Ԃh[CK߭R|8P^br8P^-gYP-t2vK-dFw"pQ:{#aQH\E!.JV2Hdo!& wgMbARlXƀ`2p9:Yi2i=X\cB9EM>G<1x!j9wIբb,OӬipr5(?j6j\q&rʘsCCTrkFugVR:)#$x0.ۍa`*Y@Y2d,8^GD Ϭ7dc<&e0TIqUI)W+.L,r9H4Q -Zn߇+|9uƩ&'mj@k}V LMdӽǣ]ʸDGnJ*f}nדjn)H3N̘M}ts ѽo)yx 0^r :T1x>ZNK \Ҕr;l`(M?6}0Y5uÚ*ΧŝKJ:!c]7j1j=U=lss3 z9WBt5-3\D 6~jZ[Hg%ư @u>!A"\sV.Gu;^dc &CJ;#xքbO. 
NA NS-EkDN#BS3,{P? w]3xbDVw&Nd|r|sJ+~-O pZTStH`}T]lC@ .yoM@sLbL/s5h<[LKO ݓJ ~S"N9V0LsLoGTkڮ\hꢧ_۽+."9۟@!\7Z>Ph)lw(5{W֑ &{ Ędr=8bL4`>%C%@@RgM^%% '~d),t댮90[?t;?Vw߽nM=W'?vOt9^V [k;iQL\h$^2a.L\^ea\דIoWV nZ6u.U-)ag2Q3sHԋ;\<2o9W/ܚs?r8% $z|&K$C@1AJmx⮬Z-3+4-Y4D |Tx_yGT}cdO,DeYS[yzȺ!d]9 &VF_Mw0m3-K)bwx}q:6de#zf2~oԖJe1\K$&x JāXPD] _tQʔ.`\`$7rvFܾ;|AE uFFcWä yMqltRCyX6 ew-P8czJ%00fxiJl:@%&LOp>f-tAA˅R;߹Yr0b^*ɫ/8i\(FՉHùZKzwV ֣ݮHE`b0,'#D۔&ɜ8#}0h W 7Q}J5%*4;X2%Ksfn0d =E«2DcUF%Vr40K! zGA7X (7 @7: #Pf4cL{3%y|y!<66a&1Z(EԊY0&鮸$NikpM`⒘ g]f j!^p_b+TZ vIK+! JPsI*+xQIK 9 km񃷑--͎뺘*%[JS--9,8֮'Q^$y hiޡkWx56$߿l\Z":fdy>HDqPfәJĥH1.?lWKTeX(wm%Cꗯj|Ѓl`eye'fO,xxVn+'p8">W!Q$zJ?mHdmĠx3 4}h@4BЇǣZ] H# x&$F\pas@nj2$&[@%6&|Y% eqk=rX9X)lA_oP}J55FKPn'W+tyUV )yErY TUf`*Ҷ558o[Ҧnj6Q}J5ޯ_bx(mڐz[xRo"B_m}CX$E{ D,v D6ގJYoHPDCȢ;>4 j"K0wh( ը2@IUU7ZXZ6h[iAM8 pXƥD <D:۱HYqo]rwru!Y‰f1dA? g拥a~-:_~ckZLS3rrߍCcʳ@xP cRQ _ZDt5@$?GEZ@D⧷'˔| 82S 4KnK'^Yu/fFSƗىٰ4ɰofS"Ӊ !M_OoJՌ,\,(kU.b pGrףag:G)e\m>Kz^swv~W鸏n`?f WN~,S?s%j #dz׶̅VhG鲏 1̱.HwK\аIoW-إ߫4n X> dn)c68S_[P- jKx"h2>?A\_)>lْe/:f߽ 6oMg0e?kQ2aFZwQʥ^{#w 뺌\^iwp^atsb ?2Mg׫웖_nQ3DW?z\9ҿ`a:ͧ:|: '4C_~>O{=֥mM<)ԫywg=I|DOG(?^R*7eƛbeNr_jc?Ϫ߮W_rG7 0S7|~2C$Dͥ8 pL!3Zry5.n>,_ ?ۿyX˫ŧ/~L&&$u0$s2pHLG ʂ,$Hg#&,eI P4%("lB°M5GlYH05m}>ƀ2؂=3qf*HHy!2'n@u|{hNQsh>qyTT_N$ElG߿6$a |8jmo{p_Um~{dܤdJ~!*&qvaX]xB۔N50cک0G6cCu?gIEAz3>,m$508oPBG r+D?&Z*6R À6vG4*[:uNjI^ #?=UQ1X8 p0#F"]"2J\2_i0ٿCvwpzfyiAǢTxk'4Oi%J90nThvTO7gg]Am 8+#sýG"6NƼJ@-)I䒒e1AWj=җI2(qB=bbeȸ+ؙ*>ܪHcޞp_us'TyoȗNФ_ώB 7L㋳JhokFUs am /CJ hԄcK\ C\*t㻮d`oZʍ46#WHl$* 끐&8X;4H?cQ9ǻSDgjfvǣ818ȥ)7%]\Lz.AXbxQ >.Q evN抦lݔwc$D9|Ϝ؉sށg>*ϧh=ٙ QŅ=8kG wl=|:TJR7 > I!ْ {5~d d3ᡳ)RtH=gs $zRfgz;8WIdIR#q׀0ݸ7lͽ#" ݤSŇNgu~]k|wjjyOaogwֿmy8ܜ۫W>$70%h5Og'&fAYR =j^L|15ʟu m y&eSTLݸruc:mx1ݴDњv#M4˦؞D) Є-W1Fw KGn)m y&ۦYޗ}91oz_D^,yRJKk>+*>6gzc E:W%Z~ {%d."lB"/@4Ab ,ᧆe;e"(O J5^6CĮRȈ<IAb[H`aMUJ !JS w'vi ۓHb>'E&dє$cڒCoHP e|"gϒ@x畈 e{D=.%j^ئ&'|sF#&< )0^؟yU=ڂڭ 7Ƞ4[ ($,+&-) )JVk |PZr5G)HcMte_p^JIH s,B',D\x4jΥW^îTJ& ,-dUJUSVM,9"LIhE831DS*ܿ˔z45`j+@Ze 72J"2mdG z#6EhEO7D q%"ԥw(b@Y(yh^>'Fef6,]`86lzkR1Ò(l 1Ɣp,bFYō) PQ1sC6q\n`EΡ5|vL׭mld # x}#*O,/X ysg&bk}|.( |5HQo |L̩_C{Z[_aq2 [YXn 4Y^:3K<"Y")?.V)#xq @`\0vE,\F =8]|peOq%(]pܾhcXc$dH<uq9*4yִq?|YI{f13:.jejyx4@Yz;hִQލXR"g/5m4wĚb\`48IبUIXr29 ڥ7 XRŽQvxxnUUU*TpT2\*^֦^$ B"A~if^cF;.EiK5DS[tOm, |C.|L~U Hݺb9hH ㊰ŔSAp)r2(BNʰB@_zcjgLmLJSQhgO`JH= |ZKj $gLnQ^Ziv(] 6͊OʋS (E8G62a GV_!4YOů9>S%Emp DB_˭}>l ruìF ts)[x%[T$M4˦Mһe t2Hnmhx-j!ֻ `!/Dl'o!aOb\ĘNn)mDL7-[~EowB^)#R]["Eciïd -Eɉ(88Lbŷ)[ٜ1l%bs&DyZ$l_ʋ81šS|Zm4-:'J R52sUwc s >%HŲ9`4JpsxNh$F_V'¡U>v#;^ v3QB](Ce%JܬdPzAeS?% 9C3v+>5:l?[\cimmtINۚ-—_\-g*Z׽휚(M<=oލAْ>tlXm1A ]9RKJS'YoKMP\P%*G*d"AT%4(1EHYQGkyr:Γ9 wו.?tsz 0_ֽC}Y$qӸXw1L/gj8 šЊevd@/16AZ1]!}E_P%1B d˲~QҼSuu&)`1bmR&(p?qX*XN?_(0$M/qx K#)wUu{!ۄ_dTZ1,;9i Gt)SS/ ϼr7tw:c9.-@+Ls\)yU9v@,*e@(D/&1 Cmj\ [C%a\Xa@YR!'0T1. TJr +4OҒV nF}#u/f~BT^F n +W),Y0i*eks݌"'R"L{tS'd3ﳽÖZhNÎjmRzOwam+t9@ۊa\@Lۋa0T|1 7_(E*υ޿O̱& SO3JJ =sD,#(w; ZizޕTpO\LZf^+&-ybRQCL僵HXlA,!EuQ  D*hHڰYtSBȔd1SIJDɪ>D*.&6 匦\? 2Ճ7S-4=1K\.ȭš~sDoۆ6mwK\ Z-f778)ie:l{;b1ۆnwp{!ʢM=o\[FbI[!{5{ָJ$y&\9IpjI&\#s ^  |w޽ i^*V>hԛw +o%(yH|Ǎ̻ 0%*^l^DVZ8S㿏iA/o6o>ؼQAMLJ8<_,(4BMQ\Iv$]\IGfw ~>'YL2uD+K$ysH6s&/d-- 6~bn⺓q['~zˏYho)B˭e6/!{.Y"^aC4JI:`oOW 'MWFU2=x*} @F|׬hGI"D{&aY ̐i5Ma>gNfvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000005615327515136171622017717 0ustar rootrootJan 27 15:42:12 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 27 15:42:12 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:42:12
crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:12 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:42:13 crc restorecon[4684]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
Jan 27 15:42:13 crc restorecon[4684]: [condensed: this span is a run of identical records, one per path, each of the form "<path> not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13"; the unique paths are listed below, grouped by pod]

Under /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/:
  volumes/kubernetes.io~empty-dir/catalog-content/catalog/<name> and <name>/catalog.json for each of:
    runtime-component-operator-certified (catalog.json record only at the start of this span), runtime-fabric-operator,
    sanstoragecsi-operator-bundle, silicom-sts-operator, smilecdr-operator, sriov-fec, stackable-commons-operator,
    stackable-zookeeper-operator, t8c-certified, t8c-tsc-client-certified, tawon-operator, tigera-operator,
    timemachine-operator, vault-secrets-operator, vcp-operator, webotx-operator, xcrypt-operator, zabbix-operator-certified
  volumes/kubernetes.io~empty-dir/catalog-content/cache, cache/pogreb.v1, cache/pogreb.v1/db,
    cache/pogreb.v1/db/00000-1.psg, cache/pogreb.v1/db/00000-1.psg.pmt, cache/pogreb.v1/db/db.pmt,
    cache/pogreb.v1/db/index.pmt, cache/pogreb.v1/db/main.pix, cache/pogreb.v1/db/overflow.pix, cache/pogreb.v1/digest
  volumes/kubernetes.io~empty-dir/utilities, volumes/kubernetes.io~empty-dir/utilities/copy-content
  etc-hosts
  containers/extract-utilities/{63709497, d966b7fd, f5773757}
  containers/extract-content/{81c9edb9, 57bf57ee, 86f5e6aa}
  containers/registry-server/{0aabe31d, d2af85c2, 09d157d9}

Under /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/:
  volumes/kubernetes.io~empty-dir/catalog-content, catalog-content/cache, catalog-content/cache/pogreb.v1,
    catalog-content/cache/pogreb.v1/db, cache/pogreb.v1/db/00000-1.psg, cache/pogreb.v1/db/00000-1.psg.pmt,
    cache/pogreb.v1/db/db.pmt, cache/pogreb.v1/db/index.pmt, cache/pogreb.v1/db/main.pix,
    cache/pogreb.v1/db/overflow.pix, cache/pogreb.v1/digest, catalog-content/catalog
  volumes/kubernetes.io~empty-dir/catalog-content/catalog/<name> and <name>/catalog.json for each of:
    3scale-community-operator, ack-acm-controller, ack-acmpca-controller, ack-apigateway-controller,
    ack-apigatewayv2-controller, ack-applicationautoscaling-controller, ack-athena-controller,
    ack-cloudfront-controller, ack-cloudtrail-controller, ack-cloudwatch-controller, ack-cloudwatchlogs-controller,
    ack-documentdb-controller, ack-dynamodb-controller, ack-ec2-controller, ack-ecr-controller, ack-ecs-controller,
    ack-efs-controller, ack-eks-controller, ack-elasticache-controller, ack-elbv2-controller,
    ack-emrcontainers-controller, ack-eventbridge-controller, ack-iam-controller, ack-kafka-controller,
    ack-keyspaces-controller, ack-kinesis-controller, ack-kms-controller, ack-lambda-controller,
    ack-memorydb-controller, ack-mq-controller, ack-networkfirewall-controller, ack-opensearchservice-controller,
    ack-organizations-controller, ack-pipes-controller, ack-prometheusservice-controller, ack-rds-controller,
    ack-recyclebin-controller, ack-route53-controller, ack-route53resolver-controller, ack-s3-controller,
    ack-sagemaker-controller, ack-secretsmanager-controller, ack-ses-controller, ack-sfn-controller,
    ack-sns-controller, ack-sqs-controller, ack-ssm-controller, ack-wafv2-controller,
    aerospike-kubernetes-operator, airflow-helm-operator, alloydb-omni-operator, alvearie-imaging-ingestion,
    amd-gpu-operator, analytics-operator, annotationlab, apicast-community-operator, apicurio-api-controller,
    apicurio-registry, apicurito, apimatic-kubernetes-operator, application-services-metering-operator, aqua,
    argocd-operator, assisted-service-operator, authorino-operator, automotive-infra, aws-efs-operator,
    awss3-operator-registry, azure-service-operator, beegfs-csi-driver-operator, bpfman-operator, camel-k,
    camel-karavan-operator, cass-operator-community, cert-manager, cert-utils-operator, cluster-aas-operator,
    cluster-impairment-operator, cluster-manager, cockroachdb, codeflare-operator, community-kubevirt-hyperconverged,
    community-trivy-operator, community-windows-machine-config-operator, customized-user-remediation, cxl-operator,
    dapr-kubernetes-operator, datadog-operator, datatrucker-operator, dbaas-operator, debezium-operator,
    dell-csm-operator, deployment-validation-operator, devopsinabox, dns-operator, dynatrace-operator,
    eclipse-amlen-operator, eclipse-che, ecr-secret-operator, edp-keycloak-operator, eginnovations-operator,
    egressip-ipam-operator, ember-csi-community-operator, etcd, eventing-kogito, external-secrets-operator,
    falcon-operator, fence-agents-remediation (directory record only; final record truncated mid-message: "not reset as customized by")
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:42:13 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 27 15:42:14 crc kubenswrapper[4966]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 15:42:14 crc kubenswrapper[4966]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 27 15:42:14 crc kubenswrapper[4966]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 15:42:14 crc kubenswrapper[4966]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 15:42:14 crc kubenswrapper[4966]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 27 15:42:14 crc kubenswrapper[4966]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.266779 4966 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276698 4966 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276740 4966 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276755 4966 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276766 4966 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276777 4966 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276789 4966 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276801 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276813 4966 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276823 4966 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276833 4966 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276843 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276852 4966 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276862 4966 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276871 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276879 4966 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276887 4966 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276934 4966 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276945 4966 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276966 4966 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276975 4966 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276984 4966 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.276994 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277004 4966 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277012 4966 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277019 4966 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277027 4966 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277035 4966 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277044 4966 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277054 4966 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277064 4966 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277074 4966 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277086 4966 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277095 4966 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277103 4966 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277111 4966 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277119 4966 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277126 4966 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277133 4966 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277145 4966 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277155 4966 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277165 4966 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277173 4966 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277182 4966 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277192 4966 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277200 4966 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277207 4966 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277215 4966 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277222 4966 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277230 4966 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277241 4966 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277251 4966 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277260 4966 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277268 4966 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277277 4966 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277287 4966 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277295 4966 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277305 4966 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277315 4966 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277326 4966 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277335 4966 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277343 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277356 4966 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277367 4966 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277375 4966 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277384 4966 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277395 4966 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277403 4966 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277411 4966 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277419 4966 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277427 4966 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.277435 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277579 4966 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277594 4966 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277612 4966 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277624 4966 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277636 4966 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277646 4966 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277659 4966 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277670 4966 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277681 4966 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277691 4966 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277702 4966 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277711 4966 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277720 4966 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277731 4966 flags.go:64] FLAG: --cgroup-root=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277740 4966 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277749 4966 flags.go:64] FLAG: --client-ca-file=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277758 4966 flags.go:64] FLAG: --cloud-config=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277767 4966 flags.go:64] FLAG: --cloud-provider=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.277775 4966 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278236 4966 flags.go:64] FLAG: --cluster-domain=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278247 4966 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278256 4966 flags.go:64] FLAG: --config-dir=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278266 4966 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278275 4966 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278287 4966 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278296 4966 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278305 4966 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278317 4966 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278327 4966 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278338 4966 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278348 4966 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278358 4966 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278369 4966 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278380 4966 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278390 4966 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278399 4966 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278408 4966 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278417 4966 flags.go:64] FLAG: --enable-server="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278428 4966 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278443 4966 flags.go:64] FLAG: --event-burst="100"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278456 4966 flags.go:64] FLAG: --event-qps="50"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278469 4966 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278482 4966 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278494 4966 flags.go:64] FLAG: --eviction-hard=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278509 4966 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278521 4966 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278532 4966 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278543 4966 flags.go:64] FLAG: --eviction-soft=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278554 4966 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278565 4966 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278576 4966 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278588 4966 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278600 4966 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278611 4966 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278623 4966 flags.go:64] FLAG: --feature-gates=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278637 4966 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278648 4966 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278660 4966 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278671 4966 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278682 4966 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278694 4966 flags.go:64] FLAG: --help="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278703 4966 flags.go:64] FLAG: --hostname-override=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278712 4966 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278721 4966 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278730 4966 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278739 4966 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278748 4966 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278758 4966 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278769 4966 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278777 4966 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278786 4966 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278795 4966 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278805 4966 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278814 4966 flags.go:64] FLAG: --kube-reserved=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278823 4966 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278831 4966 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278842 4966 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278851 4966 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278860 4966 flags.go:64] FLAG: --lock-file=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278869 4966 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278878 4966 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278887 4966 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278932 4966 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278942 4966 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278951 4966 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278960 4966 flags.go:64] FLAG: --logging-format="text"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278969 4966 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278979 4966 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278988 4966 flags.go:64] FLAG: --manifest-url=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.278996 4966 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279008 4966 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279017 4966 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279029 4966 flags.go:64] FLAG: --max-pods="110"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279073 4966 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279083 4966 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279092 4966 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279100 4966 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279109 4966 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279119 4966 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279129 4966 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279152 4966 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279161 4966 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279171 4966 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279180 4966 flags.go:64] FLAG: --pod-cidr=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279190 4966 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279204 4966 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279213 4966 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279222 4966 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279231 4966 flags.go:64] FLAG: --port="10250"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279240 4966 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279249 4966 flags.go:64] FLAG: --provider-id=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279258 4966 flags.go:64] FLAG: --qos-reserved=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279266 4966 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279276 4966 flags.go:64] FLAG: --register-node="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279286 4966 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279295 4966 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279323 4966 flags.go:64] FLAG: --registry-burst="10"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279337 4966 flags.go:64] FLAG: --registry-qps="5"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279347 4966 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279356 4966 flags.go:64] FLAG: --reserved-memory=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279367 4966 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279376 4966 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279386 4966 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279395 4966 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279404 4966 flags.go:64] FLAG: --runonce="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279412 4966 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279421 4966 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279431 4966 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279440 4966 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279449 4966 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279458 4966 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279468 4966 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279477 4966 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279486 4966 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279495 4966 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279504 4966 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279512 4966 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279521 4966 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279530 4966 flags.go:64] FLAG: --system-cgroups=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279539 4966 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279554 4966 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279562 4966 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279571 4966 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279591 4966 flags.go:64] FLAG: --tls-min-version=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279601 4966 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279609 4966 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279618 4966 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279627 4966 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279636 4966 flags.go:64] FLAG: --v="2"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279651 4966 flags.go:64] FLAG: --version="false"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279662 4966 flags.go:64] FLAG: --vmodule=""
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279672 4966 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.279681 4966 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.279930 4966 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.279943 4966 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.279952 4966 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.279961 4966 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.279970 4966 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.279978 4966 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.279986 4966 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.279994 4966 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280002 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280009 4966 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280017 4966 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280025 4966 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280033 4966 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280040 4966 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280049 4966 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280057 4966 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280064 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280072 4966 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280080 4966 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280087 4966 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280095 4966 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280102 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280111 4966 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280119 4966 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280128 4966 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280135 4966 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280143 4966 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280150 4966 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280161 4966 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280169 4966 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280176 4966 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280184 4966 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280192 4966 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280200 4966 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280208 4966 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280215 4966 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280223 4966 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280231 4966 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280239 4966 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280247 4966 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280257 4966 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280266 4966 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280274 4966 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280285 4966 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280296 4966 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280306 4966 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280316 4966 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280324 4966 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280333 4966 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280341 4966 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280350 4966 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280358 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280366 4966 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280374 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280382 4966 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280390 4966 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280397 4966 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280405 4966 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280414 4966 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280424 4966 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280432 4966 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280440 4966 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280448 4966 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280456 4966 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280464 4966 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280471 4966 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280479 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280487 4966 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280494 4966 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280505 4966 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.280515 4966 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.281246 4966 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.295438 4966 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.295488 4966 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295640 4966 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295654 4966 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295664 4966 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295675 4966 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295684 4966 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295692 4966 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295699 4966 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295707 4966 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295715 4966 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295725 4966 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295734 4966 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295742 4966 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295750 4966 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295757 4966 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295766 4966 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295774 4966 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295781 4966 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295789 4966 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295798 4966 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295805 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295813 4966 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295821 4966 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295828 4966 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295836 4966 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295844 4966 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295852 4966 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295859 4966 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295867 4966 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295877 4966 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295888 4966 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295932 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295944 4966 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295954 4966 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295963 4966 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295983 4966 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.295992 4966 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296001 4966 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296009 4966 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296017 4966 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296025 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296033 4966 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296041 4966 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296049 4966 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296057 4966 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296064 4966 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296073 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296081 4966 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296089 4966 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296097 4966 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296105 4966 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296113 4966 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296121 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296129 4966 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296136 4966 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296147 4966 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296157 4966 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296166 4966 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296174 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296182 4966 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296190 4966 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296200 4966 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296211 4966 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296220 4966 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296229 4966 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296237 4966 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296245 4966 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296253 4966 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296273 4966 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296281 4966 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296289 4966 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296307 4966 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.296320 4966 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296640 4966 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296653 4966 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296662 4966 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296671 4966 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296679 4966 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296688 4966 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296696 4966 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296704 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296713 4966 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296721 4966 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296729 4966 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296737 4966 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296745 4966 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296753 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296761 4966 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296769 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296776 4966 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296784 4966 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296791 4966 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296799 4966 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296807 4966 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296814 4966 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296823 4966 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296831 4966 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296838 4966 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296847 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296855 4966 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296863 4966 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296871 4966 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296879 4966 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296888 4966 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296920 4966 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296931 4966 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296940 4966 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296961 4966 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296971 4966 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296979 4966 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296988 4966 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.296996 4966 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297004 4966 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297012 4966 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297020 4966 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297027 4966 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297038 4966 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297047 4966 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297055 4966 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297064 4966 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297072 4966 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297080 4966 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297087 4966 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297095 4966 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297105 4966 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297114 4966 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297122 4966 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297130 4966 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297138 4966 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297146 4966 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297154 4966 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297161 4966 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297169 4966 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297179 4966 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297187 4966 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297194 4966 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297202 4966 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297210 4966 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297217 4966 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297227 4966 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297238 4966 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297246 4966 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297254 4966 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.297273 4966 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.297285 4966 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.297551 4966 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.303920 4966 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.304057 4966 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.305974 4966 server.go:997] "Starting client certificate rotation"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.306022 4966 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.306334 4966 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-12 21:05:46.24432377 +0000 UTC
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.306471 4966 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.334066 4966 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.337860 4966 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.340857 4966 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.364640 4966 log.go:25] "Validated CRI v1 runtime API"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.406969 4966 log.go:25] "Validated CRI v1 image API"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.409637 4966 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.415691 4966 fs.go:133]
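The wall of W-level "unrecognized feature gate" lines above is expected on OpenShift: the cluster's full feature-gate list is handed to the kubelet, and any gate not registered in this particular kubelet binary is rejected by the component-base featuregate library, while the gates it does know about end up in the final feature_gate.go:386 map. Below is a minimal sketch of that behavior, assuming the upstream k8s.io/component-base/featuregate API; note that upstream returns an unknown key as an error, and this kubelet build evidently downgrades it to a warning instead.

    package main

    import (
        "fmt"

        "k8s.io/component-base/featuregate"
    )

    func main() {
        // Register only the gates this binary knows about, as the kubelet does.
        gates := featuregate.NewFeatureGate()
        _ = gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
            "NodeSwap": {Default: false, PreRelease: featuregate.Beta},
        })

        // A key absent from the registry is rejected; "NewOLM" here stands in
        // for any of the OpenShift-only gates listed in the log.
        if err := gates.Set("NewOLM=true"); err != nil {
            fmt.Println(err) // unrecognized feature gate: NewOLM
        }

        // Known gates keep their defaults unless explicitly set, which is what
        // the final feature_gate.go:386 map reflects.
        fmt.Println(gates.Enabled("NodeSwap")) // false
    }

The warnings are therefore noise rather than a fault: they only mean the cluster-level gate list is a superset of what this kubelet build registers.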
Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-15-37-43-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.415741 4966 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.434397 4966 manager.go:217] Machine: {Timestamp:2026-01-27 15:42:14.430221748 +0000 UTC m=+0.733015266 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:dd047662-73e9-4358-9128-488711b4c80e BootID:b746d435-9a50-4ea2-9e69-34be734a7dee Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0e:9f:fd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0e:9f:fd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fd:93:9a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f5:44:6f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7c:2d:c7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:18:87:a4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:62:ef:a9:be:2f:81 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ee:09:8b:28:21:41 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.434737 4966 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.435265 4966 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.435847 4966 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.436114 4966 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.436169 4966 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.436515 4966 topology_manager.go:138] "Creating topology manager with none policy"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.436529 4966 container_manager_linux.go:303] "Creating device plugin manager"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.437376 4966 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.437421 4966 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.437716 4966 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.437848 4966 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.444083 4966 kubelet.go:418] "Attempting to sync node with API server"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.444124 4966 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
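The nodeConfig dump above is derived from the node's KubeletConfiguration. As a rough sketch, the logged systemReserved and hard-eviction values correspond to the following v1beta1 fields; the field names come from the real k8s.io/kubelet/config/v1beta1 API, the values are copied from the log line, and how this node's config file was actually generated (e.g. via MachineConfig) is not shown here.

    package main

    import (
        "fmt"

        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // Values copied from the "Creating Container Manager object" line.
        cfg := kubeletv1beta1.KubeletConfiguration{
            CgroupDriver: "systemd",
            SystemReserved: map[string]string{
                "cpu":               "200m",
                "memory":            "350Mi",
                "ephemeral-storage": "350Mi",
            },
            EvictionHard: map[string]string{
                "memory.available":   "100Mi",
                "nodefs.available":   "10%",
                "nodefs.inodesFree":  "5%",
                "imagefs.available":  "15%",
                "imagefs.inodesFree": "5%",
            },
        }
        out, err := yaml.Marshal(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }

Note how the percentages in the YAML form ("10%") surface in the log as Percentage:0.1 once parsed into HardEvictionThresholds.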
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.444184 4966 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.444204 4966 kubelet.go:324] "Adding apiserver pod source"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.444222 4966 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.452116 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused
Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.452240 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError"
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.454063 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused
Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.454178 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.454690 4966 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.456596 4966 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
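Every reflector failure above, like the earlier CSR failure, reduces to the same refused TCP dial to api-int.crc.testing:6443: the API server is simply not up yet while the kubelet starts. A quick way to verify that from the node is to repeat the dial directly; a minimal sketch, with the host and port taken from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint the kubelet's reflectors are failing to reach.
        conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
        if err != nil {
            // Prints e.g. "dial tcp 38.129.56.58:6443: connect: connection refused",
            // matching the kubelet's error text.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver endpoint reachable")
    }

Once the dial succeeds, the client-go reflectors recover on their own; the E-level "Unhandled Error" wrapping is just how these retryable list/watch failures are surfaced.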
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.458414 4966 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460147 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460195 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460212 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460230 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460255 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460271 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460288 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460312 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460330 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460348 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460369 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.460384 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.461162 4966 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.462125 4966 server.go:1280] "Started kubelet"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.462641 4966 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.463227 4966 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.464326 4966 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 27 15:42:14 crc systemd[1]: Started Kubernetes Kubelet.
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.466240 4966 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.467638 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.467673 4966 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.467776 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:27:40.754281522 +0000 UTC
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.468041 4966 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.468075 4966 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.468071 4966 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.468288 4966 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.468617 4966 server.go:460] "Adding debug handlers to kubelet server"
Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.468688 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="200ms"
Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.468872 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused
Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.468981 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError"
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.469638 4966 factory.go:153] Registering CRI-O factory
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.469668 4966 factory.go:221] Registration of the crio container factory successfully
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.470336 4966 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.470492 4966 factory.go:55] Registering systemd factory
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.471156 4966 factory.go:221] Registration of the systemd container factory successfully
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.471430 4966 factory.go:103] Registering Raw factory
Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.472314 4966 manager.go:1196] Started watching for new ooms in manager
Jan 27 15:42:14 crc kubenswrapper[4966]:
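The expiration and rotation-deadline values that certificate_manager.go logs for both the client and serving identities come straight from the NotAfter field of the certificates on disk. A small sketch that reads the same files and prints their expiry, assuming the paths from the certificate_store.go lines above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Paths taken from the certificate_store.go log lines.
        for _, path := range []string{
            "/var/lib/kubelet/pki/kubelet-client-current.pem",
            "/var/lib/kubelet/pki/kubelet-server-current.pem",
        } {
            data, err := os.ReadFile(path)
            if err != nil {
                fmt.Println(path, err)
                continue
            }
            for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
                if block.Type != "CERTIFICATE" {
                    continue
                }
                cert, err := x509.ParseCertificate(block.Bytes)
                if err != nil {
                    continue
                }
                // e.g. 2026-02-24 05:52:08 +0000 UTC for the client certificate.
                fmt.Println(path, "expires:", cert.NotAfter)
            }
        }
    }

The rotation deadline is then chosen by the certificate manager at a jittered point within the certificate's validity window, which is why it falls well before NotAfter.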
I0127 15:42:14.473286 4966 manager.go:319] Starting recovery of all containers Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.473937 4966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.58:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ea0d3a4874ffe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:42:14.462058494 +0000 UTC m=+0.764852022,LastTimestamp:2026-01-27 15:42:14.462058494 +0000 UTC m=+0.764852022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479475 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479564 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479587 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479600 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479616 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479629 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479663 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479680 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc 
kubenswrapper[4966]: I0127 15:42:14.479693 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479712 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479726 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479743 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479779 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479803 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479818 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479837 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479849 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479863 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479880 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 
15:42:14.479893 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479919 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479942 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479955 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479971 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.479983 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480000 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480023 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480042 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480058 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480075 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480090 4966 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480106 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480124 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480142 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480161 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480175 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480188 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480218 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480234 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480253 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480267 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480281 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480299 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480312 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480331 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480386 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480399 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480418 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480434 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480451 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480463 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480481 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480516 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480552 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480568 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480592 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480615 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480635 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480654 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480674 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480687 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480712 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480727 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480741 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480759 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480772 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480790 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480807 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480820 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480838 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480852 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.480994 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481020 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481038 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481064 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481082 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481098 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481113 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481137 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481156 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481169 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481183 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481199 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481212 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481227 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481241 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481258 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481283 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481295 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481309 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481323 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481335 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481349 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481361 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481375 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481387 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481409 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481425 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481439 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481462 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481485 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481505 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481528 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481554 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481588 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481610 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481627 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481654 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481676 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481695 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481716 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481729 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481745 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481758 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481774 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481790 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.481802 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485567 4966 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 
15:42:14.485644 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485686 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485712 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485726 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485747 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485764 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485779 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485801 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485815 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485830 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485870 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485885 4966 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485929 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485954 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485970 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.485991 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486008 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486730 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486763 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486781 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486797 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486814 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486830 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486849 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486862 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486877 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486891 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486926 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486941 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486956 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486974 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.486991 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487005 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487019 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487033 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487048 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487063 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487075 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487089 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487106 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487121 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487142 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487158 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487172 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487187 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487200 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487215 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487230 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487248 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487262 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487275 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487289 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487306 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487320 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487334 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487348 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487363 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487376 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487389 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487403 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487423 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487437 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487456 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487469 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487487 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487507 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487526 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487542 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487557 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487570 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487586 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487599 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487613 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487629 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487642 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487656 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487694 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487708 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487724 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487741 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487755 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487772 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487789 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487804 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487823 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487836 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487851 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487865 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487878 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487895 4966 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487922 4966 reconstruct.go:97] "Volume reconstruction finished" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.487931 4966 reconciler.go:26] "Reconciler: start to sync state" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.509786 4966 manager.go:324] Recovery completed Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.517185 4966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.519526 4966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.519578 4966 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.519609 4966 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.519657 4966 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 15:42:14 crc kubenswrapper[4966]: W0127 15:42:14.520709 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.520806 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.524198 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.525683 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.525804 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.525920 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.527020 4966 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.527128 4966 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.527202 4966 state_mem.go:36] "Initialized new in-memory state store" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.552653 4966 policy_none.go:49] "None policy: Start" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.554049 4966 memory_manager.go:170] "Starting 
memorymanager" policy="None" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.554091 4966 state_mem.go:35] "Initializing new in-memory state store" Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.568334 4966 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.617459 4966 manager.go:334] "Starting Device Plugin manager" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.617537 4966 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.617553 4966 server.go:79] "Starting device plugin registration server" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.618042 4966 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.618066 4966 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.618235 4966 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.618392 4966 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.618411 4966 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.620661 4966 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.620851 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.622474 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.622563 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.622588 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.622889 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.623393 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.623448 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.624518 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.624561 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.624577 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.624599 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.624636 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.624649 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.624843 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.625348 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.625406 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.625974 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626004 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626018 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626146 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626285 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626331 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626349 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626384 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626461 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626934 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.626967 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.627003 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.627134 4966 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.627190 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.627331 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.627369 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629122 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629163 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629178 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629184 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629184 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629216 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629275 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629282 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629304 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629599 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.629641 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.630257 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.630284 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.630293 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.669467 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="400ms" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.690492 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.690617 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.690692 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.690771 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.690855 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.690944 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.691017 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.691090 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.691164 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.691292 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.691362 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.691424 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.691495 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.691649 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.691717 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.719136 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.720982 4966 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.721046 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.721064 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.721103 4966 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.721861 4966 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.58:6443: connect: connection refused" node="crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793489 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793569 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793609 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793641 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793672 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793705 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793732 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793768 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793857 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793875 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793767 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793815 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793956 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793760 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794001 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794054 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794090 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794120 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794158 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794188 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794183 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794223 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794256 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794257 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.793991 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794306 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794327 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794345 4966 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794284 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.794313 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.922686 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.923956 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.924012 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.924024 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.924056 4966 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:42:14 crc kubenswrapper[4966]: E0127 15:42:14.924528 4966 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.58:6443: connect: connection refused" node="crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.956294 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.960388 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.977936 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.994321 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 15:42:14 crc kubenswrapper[4966]: I0127 15:42:14.998108 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:15 crc kubenswrapper[4966]: W0127 15:42:15.002910 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-13b41646b2fd1c56d35807db89f8c49a05e632e0b29a725d5b9c1a2a95c035fb WatchSource:0}: Error finding container 13b41646b2fd1c56d35807db89f8c49a05e632e0b29a725d5b9c1a2a95c035fb: Status 404 returned error can't find the container with id 13b41646b2fd1c56d35807db89f8c49a05e632e0b29a725d5b9c1a2a95c035fb Jan 27 15:42:15 crc kubenswrapper[4966]: W0127 15:42:15.005509 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-64f22015de5d0e75d9f9add6db4e10b615cce0c982f449a7bf4a18a6576ca2fe WatchSource:0}: Error finding container 64f22015de5d0e75d9f9add6db4e10b615cce0c982f449a7bf4a18a6576ca2fe: Status 404 returned error can't find the container with id 64f22015de5d0e75d9f9add6db4e10b615cce0c982f449a7bf4a18a6576ca2fe Jan 27 15:42:15 crc kubenswrapper[4966]: W0127 15:42:15.020091 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4063834b3563761c598446eeb87ca78b41fbee83f034fe9bbcd00f11023a6644 WatchSource:0}: Error finding container 4063834b3563761c598446eeb87ca78b41fbee83f034fe9bbcd00f11023a6644: Status 404 returned error can't find the container with id 4063834b3563761c598446eeb87ca78b41fbee83f034fe9bbcd00f11023a6644 Jan 27 15:42:15 crc kubenswrapper[4966]: W0127 15:42:15.021554 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a42aff976f95d248f667b4d0453caac990b0de894f909cbfefb97cb9f5ab321f WatchSource:0}: Error finding container a42aff976f95d248f667b4d0453caac990b0de894f909cbfefb97cb9f5ab321f: Status 404 returned error can't find the container with id a42aff976f95d248f667b4d0453caac990b0de894f909cbfefb97cb9f5ab321f Jan 27 15:42:15 crc kubenswrapper[4966]: E0127 15:42:15.070421 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="800ms" Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.325075 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.328349 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.328410 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.328425 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.328472 4966 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:42:15 crc kubenswrapper[4966]: E0127 15:42:15.329268 4966 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.58:6443: connect: connection refused" node="crc" Jan 27 15:42:15 crc kubenswrapper[4966]: W0127 15:42:15.338371 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:15 crc kubenswrapper[4966]: E0127 15:42:15.338482 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:42:15 crc kubenswrapper[4966]: W0127 15:42:15.399182 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:15 crc kubenswrapper[4966]: E0127 15:42:15.399320 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.463778 4966 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.468928 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:53:56.260344548 +0000 UTC Jan 27 15:42:15 crc kubenswrapper[4966]: W0127 15:42:15.500739 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:15 crc kubenswrapper[4966]: E0127 15:42:15.500807 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.526854 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"13b41646b2fd1c56d35807db89f8c49a05e632e0b29a725d5b9c1a2a95c035fb"} Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.528196 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"285dbf3f33e97b4b2ac41b1c761659a58afcfbddeaad4daf25432ba8dde2b438"} Jan 27 15:42:15 crc 
kubenswrapper[4966]: I0127 15:42:15.529215 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a42aff976f95d248f667b4d0453caac990b0de894f909cbfefb97cb9f5ab321f"} Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.530440 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4063834b3563761c598446eeb87ca78b41fbee83f034fe9bbcd00f11023a6644"} Jan 27 15:42:15 crc kubenswrapper[4966]: I0127 15:42:15.531577 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"64f22015de5d0e75d9f9add6db4e10b615cce0c982f449a7bf4a18a6576ca2fe"} Jan 27 15:42:15 crc kubenswrapper[4966]: E0127 15:42:15.872263 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="1.6s" Jan 27 15:42:15 crc kubenswrapper[4966]: W0127 15:42:15.907381 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:15 crc kubenswrapper[4966]: E0127 15:42:15.907498 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.129966 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.131569 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.131622 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.131636 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.131667 4966 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:42:16 crc kubenswrapper[4966]: E0127 15:42:16.132151 4966 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.58:6443: connect: connection refused" node="crc" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.424021 4966 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 15:42:16 crc kubenswrapper[4966]: E0127 15:42:16.425602 4966 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.464408 4966 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.469734 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:57:34.878613412 +0000 UTC Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.539368 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03"} Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.539444 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.539455 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d"} Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.539602 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab"} Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.539632 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0"} Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.541497 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.541566 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.541588 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.543183 4966 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2" exitCode=0 Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.543283 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2"} Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.543349 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.544682 4966 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.545091 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.545117 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.546649 4966 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c" exitCode=0 Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.546751 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.546784 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c"} Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.548069 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.548118 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.548141 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.549323 4966 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="57dc21f40163fe273f31c97da83e178ca880a6092e67c4daea6ae7cf894f8241" exitCode=0 Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.549449 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.549456 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"57dc21f40163fe273f31c97da83e178ca880a6092e67c4daea6ae7cf894f8241"} Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.550583 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.552210 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.552255 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.552281 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.553062 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.553113 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.553136 4966 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.554246 4966 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0" exitCode=0 Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.554287 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0"} Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.554381 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.555950 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.555990 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:16 crc kubenswrapper[4966]: I0127 15:42:16.556004 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:17 crc kubenswrapper[4966]: W0127 15:42:17.219876 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:17 crc kubenswrapper[4966]: E0127 15:42:17.220221 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.464325 4966 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.470084 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:19:20.674414111 +0000 UTC Jan 27 15:42:17 crc kubenswrapper[4966]: E0127 15:42:17.473256 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="3.2s" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.558967 4966 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073" exitCode=0 Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.559088 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073"} Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.559099 4966 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.559853 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.559885 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.559915 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.562659 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2"} Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.562701 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1"} Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.562715 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306"} Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.562727 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b"} Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.565101 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"61e313c08ec7ccfad05893e2174d867d9dc0916b937367f4830028cb5aa82237"} Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.565159 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.566209 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.566242 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.566251 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.568274 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.568316 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b4962fe98a4d56e85b34bf06c62aba732afd760b74b101291561fb8fe6ac9560"} Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.568356 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"afcde980381c287a07cdc32f8ed47856c21b63161d0aa2ce3428afa60ee7f57b"} Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.568285 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.568373 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905"} Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.568988 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.569014 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.569023 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.569135 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.569172 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.569188 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:17 crc kubenswrapper[4966]: W0127 15:42:17.643605 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:17 crc kubenswrapper[4966]: E0127 15:42:17.643693 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:42:17 crc kubenswrapper[4966]: W0127 15:42:17.712960 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.58:6443: connect: connection refused Jan 27 15:42:17 crc kubenswrapper[4966]: E0127 15:42:17.713033 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.58:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.732816 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.733998 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.734037 4966 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.734048 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.734074 4966 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:42:17 crc kubenswrapper[4966]: E0127 15:42:17.734713 4966 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.58:6443: connect: connection refused" node="crc" Jan 27 15:42:17 crc kubenswrapper[4966]: I0127 15:42:17.994122 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.470274 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:02:36.543042441 +0000 UTC Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.576635 4966 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126" exitCode=0 Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.576704 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126"} Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.576824 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.577685 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.577712 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.577721 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.582681 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.582713 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.582720 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.582780 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.583889 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2"} Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584141 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:42:18 crc kubenswrapper[4966]: 
I0127 15:42:18.584333 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584376 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584394 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584437 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584546 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584567 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584509 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584649 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584666 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.584980 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.585132 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:18 crc kubenswrapper[4966]: I0127 15:42:18.585259 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.471774 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:04:36.535747061 +0000 UTC Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.589507 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4"} Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.589541 4966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.589567 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e"} Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.589588 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.589655 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.589588 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9"} Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.589788 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f"} Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.590530 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.590546 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.590588 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.590608 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.590564 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:19 crc kubenswrapper[4966]: I0127 15:42:19.590650 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.472413 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 01:07:47.951222671 +0000 UTC Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.560105 4966 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.601358 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5"} Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.601530 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.602852 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.602891 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.602921 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.935335 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.936730 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.936792 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.936810 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
15:42:20 crc kubenswrapper[4966]: I0127 15:42:20.936847 4966 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.449846 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.450129 4966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.450203 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.451638 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.451708 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.451732 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.473084 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:00:02.419305754 +0000 UTC Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.604708 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.606031 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.606083 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.606105 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.672080 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.672484 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.674175 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.674258 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:21 crc kubenswrapper[4966]: I0127 15:42:21.674284 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:22 crc kubenswrapper[4966]: I0127 15:42:22.474579 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:19:49.571493555 +0000 UTC Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.122595 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.122806 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:23 crc 
kubenswrapper[4966]: I0127 15:42:23.124338 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.124367 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.124380 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.475096 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:05:42.434985695 +0000 UTC
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.594226 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.594411 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.595992 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.596033 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.596049 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.599188 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.609394 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.610744 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.610938 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.611071 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:23 crc kubenswrapper[4966]: I0127 15:42:23.836783 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:42:24 crc kubenswrapper[4966]: I0127 15:42:24.475749 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:34:44.481361921 +0000 UTC
Jan 27 15:42:24 crc kubenswrapper[4966]: I0127 15:42:24.614167 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:42:24 crc kubenswrapper[4966]: I0127 15:42:24.615585 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:24 crc kubenswrapper[4966]: I0127 15:42:24.615656 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:24 crc kubenswrapper[4966]: I0127 15:42:24.615681 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:24 crc kubenswrapper[4966]: E0127 15:42:24.627305 4966 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 27 15:42:25 crc kubenswrapper[4966]: I0127 15:42:25.126356 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 27 15:42:25 crc kubenswrapper[4966]: I0127 15:42:25.126557 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:42:25 crc kubenswrapper[4966]: I0127 15:42:25.127953 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:25 crc kubenswrapper[4966]: I0127 15:42:25.128035 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:25 crc kubenswrapper[4966]: I0127 15:42:25.128061 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:25 crc kubenswrapper[4966]: I0127 15:42:25.476795 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:56:50.630413 +0000 UTC
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.239865 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.240174 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.241729 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.241785 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.241808 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.247322 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.477446 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:39:42.491281782 +0000 UTC
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.565106 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.565332 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.566783 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.566828 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.566844 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.620072 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.621511 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.621746 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:26 crc kubenswrapper[4966]: I0127 15:42:26.621961 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:27 crc kubenswrapper[4966]: I0127 15:42:27.478991 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:17:11.143701805 +0000 UTC
Jan 27 15:42:28 crc kubenswrapper[4966]: W0127 15:42:28.116089 4966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 27 15:42:28 crc kubenswrapper[4966]: I0127 15:42:28.116169 4966 trace.go:236] Trace[1128502183]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:42:18.114) (total time: 10001ms):
Jan 27 15:42:28 crc kubenswrapper[4966]: Trace[1128502183]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:42:28.116)
Jan 27 15:42:28 crc kubenswrapper[4966]: Trace[1128502183]: [10.001649703s] [10.001649703s] END
Jan 27 15:42:28 crc kubenswrapper[4966]: E0127 15:42:28.116187 4966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 27 15:42:28 crc kubenswrapper[4966]: I0127 15:42:28.464512 4966 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 27 15:42:28 crc kubenswrapper[4966]: I0127 15:42:28.479870 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:57:00.906151218 +0000 UTC
Jan 27 15:42:28 crc kubenswrapper[4966]: I0127 15:42:28.784590 4966 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 27 15:42:28 crc kubenswrapper[4966]: I0127 15:42:28.784696 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 15:42:28 crc kubenswrapper[4966]: I0127 15:42:28.789203 4966 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 27 15:42:28 crc kubenswrapper[4966]: I0127 15:42:28.789269 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 15:42:29 crc kubenswrapper[4966]: I0127 15:42:29.240630 4966 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 15:42:29 crc kubenswrapper[4966]: I0127 15:42:29.240718 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 15:42:29 crc kubenswrapper[4966]: I0127 15:42:29.480266 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:39:53.057885495 +0000 UTC
Jan 27 15:42:30 crc kubenswrapper[4966]: I0127 15:42:30.481392 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:18:56.493154321 +0000 UTC
Jan 27 15:42:31 crc kubenswrapper[4966]: I0127 15:42:31.481943 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:42:27.004714613 +0000 UTC
Jan 27 15:42:32 crc kubenswrapper[4966]: I0127 15:42:32.482215 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:29:50.60805926 +0000 UTC
Jan 27 15:42:32 crc kubenswrapper[4966]: I0127 15:42:32.970214 4966 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.130840 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.131037 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.132179 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.132245 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.132272 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.137059 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.483307 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:35:00.483966353 +0000 UTC
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.637521 4966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.637733 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.638594 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.638644 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.638660 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.789742 4966 trace.go:236] Trace[1148972889]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:42:22.828) (total time: 10961ms):
Jan 27 15:42:33 crc kubenswrapper[4966]: Trace[1148972889]: ---"Objects listed" error: 10961ms (15:42:33.789)
Jan 27 15:42:33 crc kubenswrapper[4966]: Trace[1148972889]: [10.961548342s] [10.961548342s] END
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.789786 4966 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.790349 4966 trace.go:236] Trace[2065036935]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:42:22.098) (total time: 11691ms):
Jan 27 15:42:33 crc kubenswrapper[4966]: Trace[2065036935]: ---"Objects listed" error: 11691ms (15:42:33.790)
Jan 27 15:42:33 crc kubenswrapper[4966]: Trace[2065036935]: [11.691508427s] [11.691508427s] END
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.790369 4966 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 15:42:33 crc kubenswrapper[4966]: E0127 15:42:33.790874 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.793471 4966 trace.go:236] Trace[1561768541]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:42:23.766) (total time: 10026ms):
Jan 27 15:42:33 crc kubenswrapper[4966]: Trace[1561768541]: ---"Objects listed" error: 10026ms (15:42:33.793)
Jan 27 15:42:33 crc kubenswrapper[4966]: Trace[1561768541]: [10.026716962s] [10.026716962s] END
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.793512 4966 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.794457 4966 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 27 15:42:33 crc kubenswrapper[4966]: E0127 15:42:33.795713 4966 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 27 15:42:33 crc kubenswrapper[4966]: I0127 15:42:33.814863 4966 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.323789 4966 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38394->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.323868 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38394->192.168.126.11:17697: read: connection reset by peer"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.323809 4966 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38388->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.324004 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38388->192.168.126.11:17697: read: connection reset by peer"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.324449 4966 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.324481 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.456053 4966 apiserver.go:52] "Watching apiserver"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.459953 4966 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.460174 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.460510 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.460603 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.460689 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.460712 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.460799 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.460967 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.461310 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.461493 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.461550 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.463161 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.463320 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.463806 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.463954 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.463977 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.464495 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.465989 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.465994 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.466685 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.469497 4966 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.483808 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:50:36.340194562 +0000 UTC
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.491605 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.499306 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.499353 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.499373 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.499392 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.499712 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500059 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500070 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500137 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500186 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500211 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500255 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500480 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.500308 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:42:35.000280381 +0000 UTC m=+21.303073869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500526 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500424 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500556 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500583 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500605 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500633 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500662 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500689 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500714 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500783 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500807 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500834 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500857 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500881 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500923 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500949 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500972 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500996 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501020 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501046 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501069 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501095 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501118 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501146 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501184 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501210 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501249 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501274 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501296 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501323 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501348 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501372 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501394 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501417 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501441 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501463 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501513 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501537 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501559 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501582 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501617 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501639 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501660 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501680 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501701 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501725 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501746 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501769 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501794 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501829 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501853 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501875 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501913 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501935 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501957 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501978 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502005 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502026 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502048 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502070 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502092 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502126 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502152 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502175 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502199 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502223 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502246 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502267 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502291 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502312 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502334 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502356 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502379 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502402 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502425 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502447 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502484 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502508 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502529 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502550 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502571 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502594 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502617 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502638 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502659 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502680 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502700 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502721 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502743 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502766 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502790 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502813 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502835 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502859 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502880 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502922 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502946 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502968 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502990 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503013 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503037 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503059 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503081 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503103 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503127 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503149 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503173 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127
15:42:34.503196 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503218 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503243 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503267 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503289 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503311 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503335 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503358 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503381 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503403 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503425 
4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503448 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503471 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503497 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503520 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.500753 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501322 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501755 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.501933 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502350 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502766 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.502939 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503346 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503520 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503741 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503760 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.504031 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.504736 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.504867 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.505062 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.505138 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.505401 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.505688 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.505733 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.506006 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.506602 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.506832 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.506948 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.506960 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.507099 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.507208 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.507267 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.507318 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.506593 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.507351 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.506877 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.507745 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.503543 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508545 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508577 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508601 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508624 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508689 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508712 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508732 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508756 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508979 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.508999 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509012 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509020 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509033 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509126 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509158 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509182 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509136 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509206 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509234 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509301 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509346 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509385 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509416 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509444 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509477 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509506 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509595 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509627 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509656 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509690 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509722 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509752 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509784 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509822 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509850 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509877 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509960 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509990 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510019 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510051 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510079 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510106 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510136 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510164 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510193 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510220 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 15:42:34 crc 
kubenswrapper[4966]: I0127 15:42:34.510248 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510282 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511010 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511048 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511083 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511116 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511149 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511183 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511212 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511238 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511304 
4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511334 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511362 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511391 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511417 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511444 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511475 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511504 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511545 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511580 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 15:42:34 
crc kubenswrapper[4966]: I0127 15:42:34.511615 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511646 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511673 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511752 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511850 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511884 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511930 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511963 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511995 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512040 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512067 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512097 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512126 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512158 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512185 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512209 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512237 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512263 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512381 4966 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512399 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512417 4966 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512431 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512445 4966 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512461 4966 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512478 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512629 4966 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512648 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512663 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512677 4966 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512691 4966 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512706 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512720 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512734 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512747 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512762 4966 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512775 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512788 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512806 4966 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512823 4966 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512836 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512849 4966 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512863 4966 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512919 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512938 4966 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512952 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512965 4966 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512979 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512993 4966 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513007 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513021 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513035 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513048 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513061 4966 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513075 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513088 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513105 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513119 4966 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513134 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513148 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513161 4966 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.514504 4966 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509292 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509453 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509474 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509778 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509845 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.509947 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510056 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510154 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510264 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510237 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516362 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510329 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510458 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.510669 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511130 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511184 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511317 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511463 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.511887 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512095 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516449 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512253 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512378 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512432 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512535 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512653 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512723 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.512971 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513145 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513173 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513188 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513369 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513558 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513798 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513807 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513882 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513918 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.513993 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.514108 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.514276 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.514375 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.514741 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.514842 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.514855 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.514832 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.514998 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.515025 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.515183 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.515308 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.515415 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.515518 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.515540 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.515772 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516717 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516054 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516105 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516119 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516121 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.516373 4966 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516931 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516969 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.517046 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:35.017016926 +0000 UTC m=+21.319810404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.517571 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.517861 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.518012 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.518211 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.518249 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.518381 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.518622 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.518725 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.518818 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.519159 4966 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.519253 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:35.019230255 +0000 UTC m=+21.322023943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.519445 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516448 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.516670 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.519746 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.520017 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.520139 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.520474 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.520745 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.520828 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.521477 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.521524 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.521532 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.521589 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.521805 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.521878 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.521910 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.529828 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.531052 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.531945 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.532453 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.532452 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.532515 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.532528 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.532534 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.532834 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.533466 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.534061 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.534891 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.535644 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.536550 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.536750 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.536815 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.536949 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.537524 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.537864 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.543141 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.543294 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.544493 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.544765 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.544998 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.545079 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.545094 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.545142 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.545166 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.545206 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.545444 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.546023 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.572540 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.572987 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.573128 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.573240 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.573442 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.573656 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.573686 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.573802 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.573843 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.573886 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.574055 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.574030 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.574075 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.573851 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.574284 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.574330 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.574354 4966 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.574457 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.575134 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.575593 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.575727 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.576738 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.576842 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.576974 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.577235 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.577315 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:35.077276777 +0000 UTC m=+21.380070265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.577528 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.577656 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.577274 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.577779 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.577908 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.577885 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.578376 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.578400 4966 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.579694 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.579713 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: E0127 15:42:34.580153 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:35.080135687 +0000 UTC m=+21.382929185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.581403 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.583205 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.583750 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.583755 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.588440 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.588523 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.589287 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.589875 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.590113 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.590366 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.590585 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.591026 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.591168 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.591227 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.591274 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.591302 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.591175 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.591588 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.592061 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.593111 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.595366 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.596026 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.596672 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.597631 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.600290 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.600822 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.604516 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.605253 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.606535 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.607118 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.608732 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.609613 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.611602 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.614258 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.614288 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.617035 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.617054 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.617134 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.617908 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618103 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618190 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618224 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618250 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618272 4966 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618290 4966 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618315 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618332 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618348 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618361 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath 
\"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618380 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618394 4966 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618245 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618406 4966 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618554 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618568 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618577 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618587 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618597 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618607 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618617 4966 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618626 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618671 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:34 crc 
kubenswrapper[4966]: I0127 15:42:34.618680 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618690 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618698 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618707 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618716 4966 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618727 4966 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618736 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618745 4966 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618770 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618809 4966 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618818 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618828 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618837 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618847 4966 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618887 4966 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618919 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618928 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618937 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618978 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618988 4966 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.618997 4966 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619006 4966 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619016 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619026 4966 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619035 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619042 4966 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619098 4966 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619109 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619117 4966 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619126 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619134 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619142 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619150 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619158 4966 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619201 4966 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619211 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619219 4966 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619227 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619237 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619245 4966 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619253 4966 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619261 4966 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619270 4966 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619278 4966 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619287 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619295 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619303 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619311 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619320 4966 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619328 4966 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619336 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619344 4966 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619352 4966 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619360 4966 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619368 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619376 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619384 4966 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619394 4966 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619409 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619424 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619438 4966 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619449 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619458 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619466 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619474 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619483 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619491 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619499 4966 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619508 4966 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619516 4966 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619524 4966 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619532 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619540 4966 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619549 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619558 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619568 4966 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619580 4966 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619593 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619603 4966 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619616 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619629 4966 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619640 4966 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619651 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619660 4966 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619670 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619682 4966 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619693 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619703 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619715 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619725 4966 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619734 4966 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619742 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619750 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619758 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619773 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619785 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619797 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619808 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619819 4966 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619830 4966 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619865 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619879 4966 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.619890 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620576 4966 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620591 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620629 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620684 4966 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620759 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620773 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620786 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620838 4966 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620849 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620860 4966 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620871 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620882 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620919 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620931 4966 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620942 4966 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620952 4966 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620964 4966 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620975 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620986 4966 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.620998 4966 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.621009 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.621020 4966 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.621032 4966 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.622405 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.622916 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.624005 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.624837 4966 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.625885 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.627289 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.633094 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.633697 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.634131 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.635811 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.636726 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.638168 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.638800 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.640172 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.640680 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.641111 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.641207 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.642102 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.642725 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.643118 4966 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2" exitCode=255
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.643710 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.644670 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.645146 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.645654 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.646617 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.647984 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.648567 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.649081 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.650724 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.650978 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.651342 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.652552 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.653121 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.653715 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2"}
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.661829 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.670476 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.670631 4966 scope.go:117] "RemoveContainer" containerID="35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.679820 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.689775 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.699749 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.709375 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.717136 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.721850 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.721871 4966 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.721881 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.728336 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.738713 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.746419 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.755405 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27
T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.766045 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.776051 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.776051 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.782268 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.787684 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:42:34 crc kubenswrapper[4966]: I0127 15:42:34.791730 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:42:34 crc kubenswrapper[4966]: W0127 15:42:34.798397 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-3eaa4ffeb7cdd077b62229558fefb1811ff89a820cf437cbd2593971c290ae57 WatchSource:0}: Error finding container 3eaa4ffeb7cdd077b62229558fefb1811ff89a820cf437cbd2593971c290ae57: Status 404 returned error can't find the container with id 3eaa4ffeb7cdd077b62229558fefb1811ff89a820cf437cbd2593971c290ae57 Jan 27 15:42:34 crc kubenswrapper[4966]: W0127 15:42:34.799869 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1ac129f33c4b21d18202204f953a7e44edceca48deb8ccdea592291dca094814 WatchSource:0}: Error finding container 1ac129f33c4b21d18202204f953a7e44edceca48deb8ccdea592291dca094814: Status 404 returned error can't find the container with id 1ac129f33c4b21d18202204f953a7e44edceca48deb8ccdea592291dca094814 Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.023878 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.024037 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.024118 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.024292 4966 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.024378 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:36.024355274 +0000 UTC m=+22.327148762 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.024891 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:42:36.02487915 +0000 UTC m=+22.327672638 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.025993 4966 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.026037 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:36.026028456 +0000 UTC m=+22.328821944 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.124584 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.124676 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.124794 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.124807 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.124824 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.124833 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.124842 4966 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.124847 4966 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.124914 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:36.124875259 +0000 UTC m=+22.427668767 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.124940 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:36.124927991 +0000 UTC m=+22.427721489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.484080 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:45:03.689998116 +0000 UTC Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.520646 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:35 crc kubenswrapper[4966]: E0127 15:42:35.520834 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.646593 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3"} Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.646636 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3eaa4ffeb7cdd077b62229558fefb1811ff89a820cf437cbd2593971c290ae57"} Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.648946 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.650775 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049"} Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.650977 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.652769 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"75496a70b5a3af0e19a68bb4adbb38bc514dc1394082b18578b259cf408148aa"} Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.654819 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4"} Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.654880 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f"} Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.654941 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1ac129f33c4b21d18202204f953a7e44edceca48deb8ccdea592291dca094814"} Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.669444 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.684273 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.696494 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.712040 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.728992 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.744757 4966 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.759166 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.775501 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.793100 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.806926 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.822980 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.838648 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.855690 4966 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:35 crc kubenswrapper[4966]: I0127 15:42:35.871229 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.036520 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.036673 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.036715 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.036817 4966 
configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.036827 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:42:38.036782148 +0000 UTC m=+24.339575646 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.036917 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:38.036876721 +0000 UTC m=+24.339670199 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.036960 4966 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.037144 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:38.037116719 +0000 UTC m=+24.339910207 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.137769 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.137829 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.137944 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.137968 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.137981 4966 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.138031 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:38.138015047 +0000 UTC m=+24.440808535 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.137944 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.138055 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.138065 4966 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.138094 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:38.138085089 +0000 UTC m=+24.440878567 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.245425 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.250441 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.256301 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.266087 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.284433 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.302692 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.336160 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.353722 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.371410 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.385790 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.406339 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.420100 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.445867 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.459983 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.476706 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.485156 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:45:01.22112666 +0000 UTC Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.508493 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.520458 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.520458 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.520660 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.520763 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.523010 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.524686 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.526011 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.527721 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.528913 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.530207 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.530947 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.532276 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.533005 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.533653 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.540107 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.592962 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.603559 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.607349 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.611189 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.625096 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.635159 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.644472 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: E0127 15:42:36.667513 4966 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.668099 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.681318 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.697490 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.707531 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.724244 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.736150 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.746053 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.759402 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.771470 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.791605 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.804499 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.821724 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:36 crc kubenswrapper[4966]: I0127 15:42:36.836636 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.486003 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:55:01.720235666 +0000 UTC Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.520753 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:37 crc kubenswrapper[4966]: E0127 15:42:37.520939 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.662746 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5"} Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.686556 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:37Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.707320 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:37Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.732209 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:37Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.751503 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:37Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.769752 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:37Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.785270 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:37Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.803866 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:37Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.819017 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:37Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:37 crc kubenswrapper[4966]: I0127 15:42:37.835325 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:37Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.055013 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.055106 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.055139 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.055216 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:42:42.055184284 +0000 UTC m=+28.357977782 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.055233 4966 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.055301 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:42.055282637 +0000 UTC m=+28.358076145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.055310 4966 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.055403 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:42.055383231 +0000 UTC m=+28.358176779 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.156565 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.156638 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.156763 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.156780 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.156790 4966 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.156802 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.156835 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.156849 4966 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.156837 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:42.156823245 +0000 UTC m=+28.459616733 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.156953 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:42.156916838 +0000 UTC m=+28.459710326 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.343224 4966 csr.go:261] certificate signing request csr-nz25d is approved, waiting to be issued Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.359389 4966 csr.go:257] certificate signing request csr-nz25d is issued Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.487141 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:41:11.639402663 +0000 UTC Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.520714 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.520789 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.520860 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:38 crc kubenswrapper[4966]: E0127 15:42:38.520967 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.791244 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fkbrc"] Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.791651 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-fkbrc" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.793434 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.793819 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.793999 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.808912 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.827219 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.839631 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.861496 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2b59127c-bd9c-493b-b2fc-5cec06c21bf8-hosts-file\") pod \"node-resolver-fkbrc\" (UID: \"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\") " pod="openshift-dns/node-resolver-fkbrc" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.861545 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggpm\" (UniqueName: \"kubernetes.io/projected/2b59127c-bd9c-493b-b2fc-5cec06c21bf8-kube-api-access-zggpm\") pod \"node-resolver-fkbrc\" (UID: \"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\") " pod="openshift-dns/node-resolver-fkbrc" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.864732 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.878164 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.890226 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.915669 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.931591 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.948870 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z"
Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.962367 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2b59127c-bd9c-493b-b2fc-5cec06c21bf8-hosts-file\") pod \"node-resolver-fkbrc\" (UID: \"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\") " pod="openshift-dns/node-resolver-fkbrc"
Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.962414 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zggpm\" (UniqueName: \"kubernetes.io/projected/2b59127c-bd9c-493b-b2fc-5cec06c21bf8-kube-api-access-zggpm\") pod \"node-resolver-fkbrc\" (UID: \"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\") " pod="openshift-dns/node-resolver-fkbrc"
Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.962501 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2b59127c-bd9c-493b-b2fc-5cec06c21bf8-hosts-file\") pod \"node-resolver-fkbrc\" (UID: \"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\") " pod="openshift-dns/node-resolver-fkbrc"
Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.972533 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:38Z is after 2025-08-24T17:21:41Z"
Jan 27 15:42:38 crc kubenswrapper[4966]: I0127 15:42:38.982652 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zggpm\" (UniqueName: \"kubernetes.io/projected/2b59127c-bd9c-493b-b2fc-5cec06c21bf8-kube-api-access-zggpm\") pod \"node-resolver-fkbrc\" (UID: \"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\") " pod="openshift-dns/node-resolver-fkbrc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.103849 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fkbrc"
Jan 27 15:42:39 crc kubenswrapper[4966]: W0127 15:42:39.126275 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b59127c_bd9c_493b_b2fc_5cec06c21bf8.slice/crio-5273c0efb2ce513fcbe28484a3ae0e9ba430e7b7ceef0a6c3122cc009ac69a22 WatchSource:0}: Error finding container 5273c0efb2ce513fcbe28484a3ae0e9ba430e7b7ceef0a6c3122cc009ac69a22: Status 404 returned error can't find the container with id 5273c0efb2ce513fcbe28484a3ae0e9ba430e7b7ceef0a6c3122cc009ac69a22
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.204374 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wtl9v"]
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.204700 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.207283 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glbg8"]
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.207732 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.207745 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.207908 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.208131 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.209633 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.210299 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-24tw6"]
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.210804 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5xktc"]
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.210958 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-24tw6"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.211078 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.219618 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.219666 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.220473 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.220516 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.220698 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.221015 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.226139 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.226612 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.226920 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.227287 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.227405 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.227629 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.227697 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.228034 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.233149
4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.238664 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.261840 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265661 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/108b786e-606b-4603-b136-6a3d61fe7ad5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265699 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-run-multus-certs\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265716 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-netns\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265733 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-log-socket\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265749 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-bin\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265764 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-os-release\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265778 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-var-lib-kubelet\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265791 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-run-netns\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265808 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-kubelet\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265825 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-env-overrides\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265853 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-openvswitch\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265873 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-var-lib-cni-bin\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265888 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43e2b070-838d-4a18-9a86-1683f64b641c-multus-daemon-config\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265919 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-systemd\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265934 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-var-lib-cni-multus\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265948 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.265976 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-multus-cni-dir\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266242 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75889828-fc5d-4516-a3c4-db3affd4f810-proxy-tls\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266259 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6z46\" (UniqueName: \"kubernetes.io/projected/75889828-fc5d-4516-a3c4-db3affd4f810-kube-api-access-g6z46\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266299 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-config\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266347 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/75889828-fc5d-4516-a3c4-db3affd4f810-rootfs\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266387 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-hostroot\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266404 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lct5\" (UniqueName: \"kubernetes.io/projected/43e2b070-838d-4a18-9a86-1683f64b641c-kube-api-access-5lct5\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266420 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-slash\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266464 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-system-cni-dir\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266481 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-multus-socket-dir-parent\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266596 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-run-k8s-cni-cncf-io\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266672 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-var-lib-openvswitch\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266712 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-etc-openvswitch\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266738 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgdlv\" (UniqueName: \"kubernetes.io/projected/4a25d116-d49b-4533-bac7-74bee93062b1-kube-api-access-fgdlv\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266782 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-os-release\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266823 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266846 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-system-cni-dir\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266881 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75889828-fc5d-4516-a3c4-db3affd4f810-mcd-auth-proxy-config\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266930 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266959 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-cnibin\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.266980 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/108b786e-606b-4603-b136-6a3d61fe7ad5-cni-binary-copy\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267002 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-systemd-units\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267062 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-node-log\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267108 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-cnibin\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc"
Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267133 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a25d116-d49b-4533-bac7-74bee93062b1-ovn-node-metrics-cert\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267164 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-ovn\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267191 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-etc-kubernetes\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267209 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-multus-conf-dir\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267224 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-netd\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267244 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-script-lib\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267337 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r62j\" (UniqueName: \"kubernetes.io/projected/108b786e-606b-4603-b136-6a3d61fe7ad5-kube-api-access-2r62j\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.267383 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43e2b070-838d-4a18-9a86-1683f64b641c-cni-binary-copy\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.276451 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.290790 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.307641 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.318676 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.331526 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.349367 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e
9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.360386 4966 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 15:37:38 +0000 UTC, rotation deadline is 2026-10-31 10:24:03.744895513 +0000 UTC Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.360446 4966 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6642h41m24.384452981s for next certificate rotation Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.363181 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368118 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-bin\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368159 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-os-release\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368185 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-var-lib-kubelet\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368209 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-run-multus-certs\") pod \"multus-5xktc\" (UID: 
\"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368230 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-netns\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368251 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-log-socket\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368260 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-run-multus-certs\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368279 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-var-lib-kubelet\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368297 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-run-netns\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368270 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-run-netns\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368228 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-bin\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368315 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-log-socket\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368335 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-kubelet\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368357 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-env-overrides\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368380 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-var-lib-cni-bin\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368401 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43e2b070-838d-4a18-9a86-1683f64b641c-multus-daemon-config\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368400 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-netns\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368451 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-var-lib-cni-bin\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368452 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-systemd\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368451 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-kubelet\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368424 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-systemd\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368517 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-openvswitch\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368530 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-os-release\") pod \"multus-5xktc\" (UID: 
\"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368560 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368538 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368620 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-openvswitch\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368681 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-var-lib-cni-multus\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368654 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-var-lib-cni-multus\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368713 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-multus-cni-dir\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368731 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75889828-fc5d-4516-a3c4-db3affd4f810-proxy-tls\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368747 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6z46\" (UniqueName: \"kubernetes.io/projected/75889828-fc5d-4516-a3c4-db3affd4f810-kube-api-access-g6z46\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368768 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-config\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368788 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-hostroot\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368814 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/75889828-fc5d-4516-a3c4-db3affd4f810-rootfs\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368841 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-system-cni-dir\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368860 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-multus-socket-dir-parent\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368874 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lct5\" (UniqueName: \"kubernetes.io/projected/43e2b070-838d-4a18-9a86-1683f64b641c-kube-api-access-5lct5\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368909 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-slash\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368927 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgdlv\" (UniqueName: \"kubernetes.io/projected/4a25d116-d49b-4533-bac7-74bee93062b1-kube-api-access-fgdlv\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368946 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-os-release\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368963 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " 
pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368981 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-system-cni-dir\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368978 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/75889828-fc5d-4516-a3c4-db3affd4f810-rootfs\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368996 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-run-k8s-cni-cncf-io\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369014 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-var-lib-openvswitch\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369015 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-env-overrides\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369033 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-etc-openvswitch\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369052 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75889828-fc5d-4516-a3c4-db3affd4f810-mcd-auth-proxy-config\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369071 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-os-release\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369076 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-cnibin\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " 
pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369098 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-cnibin\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369106 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-host-run-k8s-cni-cncf-io\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369103 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-multus-socket-dir-parent\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.368925 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-hostroot\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369124 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/108b786e-606b-4603-b136-6a3d61fe7ad5-cni-binary-copy\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369128 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-etc-openvswitch\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369074 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-var-lib-openvswitch\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369154 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-slash\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369008 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-multus-cni-dir\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369194 4966 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-systemd-units\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369189 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43e2b070-838d-4a18-9a86-1683f64b641c-multus-daemon-config\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369308 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-system-cni-dir\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369355 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-systemd-units\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369372 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-system-cni-dir\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369389 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-node-log\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369429 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-config\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369445 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369433 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-node-log\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369476 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-cnibin\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369498 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369499 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a25d116-d49b-4533-bac7-74bee93062b1-ovn-node-metrics-cert\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369531 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-etc-kubernetes\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369550 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-ovn\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369569 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r62j\" (UniqueName: \"kubernetes.io/projected/108b786e-606b-4603-b136-6a3d61fe7ad5-kube-api-access-2r62j\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369571 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-cnibin\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369586 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43e2b070-838d-4a18-9a86-1683f64b641c-cni-binary-copy\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369600 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-ovn\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369612 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-multus-conf-dir\") pod \"multus-5xktc\" (UID: 
\"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369639 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-multus-conf-dir\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369649 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-netd\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369675 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-script-lib\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369686 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/108b786e-606b-4603-b136-6a3d61fe7ad5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369713 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/108b786e-606b-4603-b136-6a3d61fe7ad5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369573 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43e2b070-838d-4a18-9a86-1683f64b641c-etc-kubernetes\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369731 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75889828-fc5d-4516-a3c4-db3affd4f810-mcd-auth-proxy-config\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369739 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/108b786e-606b-4603-b136-6a3d61fe7ad5-cni-binary-copy\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.369778 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-netd\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.370357 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43e2b070-838d-4a18-9a86-1683f64b641c-cni-binary-copy\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.370547 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-script-lib\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.370640 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/108b786e-606b-4603-b136-6a3d61fe7ad5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.371788 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75889828-fc5d-4516-a3c4-db3affd4f810-proxy-tls\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.373775 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a25d116-d49b-4533-bac7-74bee93062b1-ovn-node-metrics-cert\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.388151 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.388594 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r62j\" (UniqueName: \"kubernetes.io/projected/108b786e-606b-4603-b136-6a3d61fe7ad5-kube-api-access-2r62j\") pod \"multus-additional-cni-plugins-24tw6\" (UID: \"108b786e-606b-4603-b136-6a3d61fe7ad5\") " pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.389348 
4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgdlv\" (UniqueName: \"kubernetes.io/projected/4a25d116-d49b-4533-bac7-74bee93062b1-kube-api-access-fgdlv\") pod \"ovnkube-node-glbg8\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.390848 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lct5\" (UniqueName: \"kubernetes.io/projected/43e2b070-838d-4a18-9a86-1683f64b641c-kube-api-access-5lct5\") pod \"multus-5xktc\" (UID: \"43e2b070-838d-4a18-9a86-1683f64b641c\") " pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.396830 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6z46\" (UniqueName: \"kubernetes.io/projected/75889828-fc5d-4516-a3c4-db3affd4f810-kube-api-access-g6z46\") pod \"machine-config-daemon-wtl9v\" (UID: \"75889828-fc5d-4516-a3c4-db3affd4f810\") " pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.402940 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.419007 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.432742 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.443633 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.459285 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: 
I0127 15:42:39.474375 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.484365 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.488166 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:48:20.9702491 +0000 UTC Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.492815 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.504890 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.515929 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.518167 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.519978 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:39 crc kubenswrapper[4966]: E0127 15:42:39.520053 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.527648 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: W0127 15:42:39.529178 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75889828_fc5d_4516_a3c4_db3affd4f810.slice/crio-432cc7af32142ce73277da4316de334bed46084072c30c6f6531234076de18fe WatchSource:0}: Error finding container 432cc7af32142ce73277da4316de334bed46084072c30c6f6531234076de18fe: Status 404 returned error can't find the container with id 432cc7af32142ce73277da4316de334bed46084072c30c6f6531234076de18fe Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.532787 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.541322 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.542617 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-24tw6" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.548585 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5xktc" Jan 27 15:42:39 crc kubenswrapper[4966]: W0127 15:42:39.549618 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 WatchSource:0}: Error finding container de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28: Status 404 returned error can't find the container with id de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.555493 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: W0127 15:42:39.557381 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod108b786e_606b_4603_b136_6a3d61fe7ad5.slice/crio-47607d3b28da6b43432ca39cf28950c3c189ab2cd06573b2bb5b01b1e2b15322 WatchSource:0}: Error finding container 47607d3b28da6b43432ca39cf28950c3c189ab2cd06573b2bb5b01b1e2b15322: Status 404 returned error can't find the container with id 47607d3b28da6b43432ca39cf28950c3c189ab2cd06573b2bb5b01b1e2b15322 Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.572455 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.597229 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.675428 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6" exitCode=0 Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.675509 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6"} Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.675540 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28"} Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.679479 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xktc" event={"ID":"43e2b070-838d-4a18-9a86-1683f64b641c","Type":"ContainerStarted","Data":"db40d1d883507700b6c00d4a5774af5c988cf4a34bcffe2b06d29d30a911b71f"} Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.690203 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.691578 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" event={"ID":"108b786e-606b-4603-b136-6a3d61fe7ad5","Type":"ContainerStarted","Data":"47607d3b28da6b43432ca39cf28950c3c189ab2cd06573b2bb5b01b1e2b15322"} Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.702523 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.703812 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664"} Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.703860 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"432cc7af32142ce73277da4316de334bed46084072c30c6f6531234076de18fe"} Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.709327 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fkbrc" event={"ID":"2b59127c-bd9c-493b-b2fc-5cec06c21bf8","Type":"ContainerStarted","Data":"7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e"} Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.709382 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fkbrc" event={"ID":"2b59127c-bd9c-493b-b2fc-5cec06c21bf8","Type":"ContainerStarted","Data":"5273c0efb2ce513fcbe28484a3ae0e9ba430e7b7ceef0a6c3122cc009ac69a22"} Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.717024 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.736720 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.750799 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.763345 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.776224 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.793847 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.804679 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.816535 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.827616 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.837917 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.853268 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.878024 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.891436 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.907251 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.924513 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.943301 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.956520 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.968175 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:39 crc kubenswrapper[4966]: I0127 15:42:39.983058 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.014141 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.030787 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.042072 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.052252 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.059977 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.071621 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.090963 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.199988 4966 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.202030 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.202076 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.202088 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.202212 4966 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.212097 4966 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.212369 4966 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.213587 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.213678 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.213759 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.213870 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.213995 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: E0127 15:42:40.232610 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.235976 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.236009 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.236018 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.236032 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.236041 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: E0127 15:42:40.248306 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.251912 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.252091 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.252184 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.252278 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.252356 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: E0127 15:42:40.269137 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.274293 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.274323 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.274336 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.274353 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.274364 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: E0127 15:42:40.303150 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.308700 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.308742 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.308754 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.308771 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.308788 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: E0127 15:42:40.330393 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: E0127 15:42:40.330505 4966 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.331843 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.331871 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.331882 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.331913 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.331922 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.433728 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.433755 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.433764 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.433778 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.433787 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.488298 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:06:01.917799092 +0000 UTC Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.520815 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:40 crc kubenswrapper[4966]: E0127 15:42:40.520934 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.521187 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:40 crc kubenswrapper[4966]: E0127 15:42:40.521237 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.535595 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.535620 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.535630 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.535643 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.535652 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.639342 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.639680 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.639689 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.639703 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.639713 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.717578 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.717621 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.717633 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.717641 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.717649 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.717656 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.719365 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xktc" event={"ID":"43e2b070-838d-4a18-9a86-1683f64b641c","Type":"ContainerStarted","Data":"7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.721777 4966 generic.go:334] "Generic (PLEG): container finished" podID="108b786e-606b-4603-b136-6a3d61fe7ad5" containerID="b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3" exitCode=0 Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.721847 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" event={"ID":"108b786e-606b-4603-b136-6a3d61fe7ad5","Type":"ContainerDied","Data":"b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.727528 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.741741 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.741772 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc 
kubenswrapper[4966]: I0127 15:42:40.741780 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.741793 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.741801 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.745711 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.757910 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.771701 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.782836 4966 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.793072 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.807813 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.817122 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.827393 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.836097 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.844080 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.844116 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.844128 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.844180 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.844192 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.847194 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 
2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.867061 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.876990 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.889142 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.916313 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.936950 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b
7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.946252 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.946292 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.946306 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.946322 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.946333 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:40Z","lastTransitionTime":"2026-01-27T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.950560 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.972566 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.986440 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:40 crc kubenswrapper[4966]: I0127 15:42:40.996962 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.009791 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 
2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.022539 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.035598 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.047740 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.052271 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.052293 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.052301 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.052314 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.052323 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.064012 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.075139 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.087675 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.100411 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.112362 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.155633 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.155675 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.155694 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.155730 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.155748 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.258374 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.258410 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.258419 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.258434 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.258445 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.361004 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.361049 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.361062 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.361078 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.361109 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.463208 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.463240 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.463248 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.463271 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.463281 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.489093 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 21:17:29.885895536 +0000 UTC Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.520837 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:41 crc kubenswrapper[4966]: E0127 15:42:41.520979 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.565846 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.565973 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.565999 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.566030 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.566051 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.668712 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.668771 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.668787 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.668811 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.668828 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.736036 4966 generic.go:334] "Generic (PLEG): container finished" podID="108b786e-606b-4603-b136-6a3d61fe7ad5" containerID="c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c" exitCode=0 Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.736082 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" event={"ID":"108b786e-606b-4603-b136-6a3d61fe7ad5","Type":"ContainerDied","Data":"c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.758189 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.771147 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.771208 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.771226 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.771251 4966 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.771269 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.774483 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.789291 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.810952 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.826924 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.848310 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.866862 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.874613 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.874678 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.874688 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.874703 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.874712 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.881690 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.893662 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.907379 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.925368 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.939638 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.951226 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.968712 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.976791 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.976847 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.976860 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.976877 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:41 crc kubenswrapper[4966]: I0127 15:42:41.976888 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:41Z","lastTransitionTime":"2026-01-27T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.079540 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.079569 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.079577 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.079589 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.079598 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.095084 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.095242 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.095283 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.095434 4966 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
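[editor's note: the status-patch failures recorded throughout this stretch of the log all fail TLS verification with "x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:41Z is after 2025-08-24T17:21:41Z". A minimal Go sketch, assuming only the webhook endpoint visible in the log (127.0.0.1:9743), for inspecting the certificate that endpoint actually presents; the skip-verify dial is for inspection only, not a recommended production check.]

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the failing webhook URL in the log entries above.
	// InsecureSkipVerify lets us read the presented certificate even though
	// normal verification would reject it as expired.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	// Print the validity window; comparing NotAfter against the current time
	// reproduces the kubelet's "certificate has expired" determination.
	cert := certs[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
		cert.Subject,
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		time.Now().After(cert.NotAfter))
}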
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.095510 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:50.095494407 +0000 UTC m=+36.398287905 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.095963 4966 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.096026 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:42:50.096010994 +0000 UTC m=+36.398804482 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.096126 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:50.096114427 +0000 UTC m=+36.398907925 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.181960 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.181992 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.182000 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.182013 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
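[editor's note: the setters.go:603 entry that follows serializes the node's Ready condition inline as JSON. A trimmed Go sketch decoding that shape; the struct below is an illustrative subset, not the actual corev1.NodeCondition type, and the payload is abridged from the log entry.]

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors only the fields visible in the setters.go log line;
// it is an illustration, not the full Kubernetes NodeCondition type.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition payload abridged from the "Node became not ready" entry below.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason) // Ready=False reason=KubeletNotReady
}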
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.182021 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.196217 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.196287 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.196445 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.196488 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.196517 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.196538 4966 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
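[editor's note: the nestedpendingoperations.go:348 entries above and below defer each failed volume operation with a growing durationBeforeRetry (8s at this point, roughly m=+36s after kubelet start). A minimal Go sketch of an exponential backoff of that general form; the initial delay, doubling factor, and cap are assumed values for illustration, not the kubelet's actual tuning.]

package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles the wait on each consecutive failure up to a
// cap, which is consistent with the 8s value visible in the log entries.
// initial/factor/maxDelay are illustrative assumptions.
func durationBeforeRetry(attempt int) time.Duration {
	const (
		initial  = 500 * time.Millisecond
		factor   = 2
		maxDelay = 2 * time.Minute
	)
	d := initial
	for i := 0; i < attempt; i++ {
		d *= factor
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	// Prints the retry schedule: 500ms, 1s, 2s, 4s, 8s, 16s, ...
	for attempt := 0; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: retry in %s\n", attempt, durationBeforeRetry(attempt))
	}
}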
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.196604 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:50.196582341 +0000 UTC m=+36.499375869 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.196482 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.196859 4966 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.196992 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:42:50.196963923 +0000 UTC m=+36.499757491 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.214349 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tgscb"]
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.214737 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tgscb" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.216869 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.217432 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.217533 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.217570 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.252121 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.268312 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.279807 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.283788 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.283826 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.283835 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.283852 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.283862 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.295043 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.297955 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2zgn\" (UniqueName: \"kubernetes.io/projected/3221fb17-1692-4499-b801-a980276d6162-kube-api-access-x2zgn\") pod \"node-ca-tgscb\" (UID: \"3221fb17-1692-4499-b801-a980276d6162\") " pod="openshift-image-registry/node-ca-tgscb" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.298072 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3221fb17-1692-4499-b801-a980276d6162-host\") pod \"node-ca-tgscb\" (UID: \"3221fb17-1692-4499-b801-a980276d6162\") " pod="openshift-image-registry/node-ca-tgscb" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.298152 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3221fb17-1692-4499-b801-a980276d6162-serviceca\") pod \"node-ca-tgscb\" (UID: \"3221fb17-1692-4499-b801-a980276d6162\") " pod="openshift-image-registry/node-ca-tgscb" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.309686 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.322818 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.349643 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.362934 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.373234 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.383267 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.386153 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.386220 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.386239 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.386263 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.386280 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.399624 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3221fb17-1692-4499-b801-a980276d6162-host\") pod \"node-ca-tgscb\" (UID: \"3221fb17-1692-4499-b801-a980276d6162\") " pod="openshift-image-registry/node-ca-tgscb"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.399703 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3221fb17-1692-4499-b801-a980276d6162-serviceca\") pod \"node-ca-tgscb\" (UID: \"3221fb17-1692-4499-b801-a980276d6162\") " pod="openshift-image-registry/node-ca-tgscb"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.399756 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2zgn\" (UniqueName: \"kubernetes.io/projected/3221fb17-1692-4499-b801-a980276d6162-kube-api-access-x2zgn\") pod \"node-ca-tgscb\" (UID: \"3221fb17-1692-4499-b801-a980276d6162\") " pod="openshift-image-registry/node-ca-tgscb"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.399763 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3221fb17-1692-4499-b801-a980276d6162-host\") pod \"node-ca-tgscb\" (UID: \"3221fb17-1692-4499-b801-a980276d6162\") " pod="openshift-image-registry/node-ca-tgscb"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.401278 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3221fb17-1692-4499-b801-a980276d6162-serviceca\") pod \"node-ca-tgscb\" (UID: \"3221fb17-1692-4499-b801-a980276d6162\") " pod="openshift-image-registry/node-ca-tgscb"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.405536 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.438042 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x2zgn\" (UniqueName: \"kubernetes.io/projected/3221fb17-1692-4499-b801-a980276d6162-kube-api-access-x2zgn\") pod \"node-ca-tgscb\" (UID: \"3221fb17-1692-4499-b801-a980276d6162\") " pod="openshift-image-registry/node-ca-tgscb" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.454880 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageI
D\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.469527 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.487809 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.487852 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.487860 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.487873 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.487883 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.490189 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:01:59.418565972 +0000 UTC
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.490822 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z"
Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.505119 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.519860 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.519963 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.519996 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:42 crc kubenswrapper[4966]: E0127 15:42:42.520114 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.532746 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tgscb" Jan 27 15:42:42 crc kubenswrapper[4966]: W0127 15:42:42.543561 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3221fb17_1692_4499_b801_a980276d6162.slice/crio-8daed3890838427daf05018b1e7d64163848bbfb7c3aeb812816bd9bdc0c8eb7 WatchSource:0}: Error finding container 8daed3890838427daf05018b1e7d64163848bbfb7c3aeb812816bd9bdc0c8eb7: Status 404 returned error can't find the container with id 8daed3890838427daf05018b1e7d64163848bbfb7c3aeb812816bd9bdc0c8eb7 Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.590073 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.590114 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.590124 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.590139 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.590150 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.693012 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.693058 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.693073 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.693093 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.693301 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.740726 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tgscb" event={"ID":"3221fb17-1692-4499-b801-a980276d6162","Type":"ContainerStarted","Data":"8daed3890838427daf05018b1e7d64163848bbfb7c3aeb812816bd9bdc0c8eb7"} Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.749439 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6"} Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.752272 4966 generic.go:334] "Generic (PLEG): container finished" podID="108b786e-606b-4603-b136-6a3d61fe7ad5" containerID="18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0" exitCode=0 Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.752310 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" event={"ID":"108b786e-606b-4603-b136-6a3d61fe7ad5","Type":"ContainerDied","Data":"18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0"} Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.765823 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.788691 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.796122 4966 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.796152 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.796159 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.796172 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.796180 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.809477 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.822427 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.835312 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.846728 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.857242 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.868748 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.889350 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.899298 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.899324 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.899332 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.899344 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.899651 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:42Z","lastTransitionTime":"2026-01-27T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.901853 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.913531 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.925915 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.946106 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.964724 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os
-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:42 crc kubenswrapper[4966]: I0127 15:42:42.974673 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.002557 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.002596 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.002620 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.002637 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.002647 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.104943 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.105006 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.105020 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.105035 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.105049 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.208000 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.208038 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.208047 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.208062 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.208072 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.310350 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.310398 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.310415 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.310435 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.310451 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.412570 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.412597 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.412606 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.412619 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.412627 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.490781 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:45:32.573443905 +0000 UTC Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.515103 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.515134 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.515144 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.515158 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.515168 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.519980 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:43 crc kubenswrapper[4966]: E0127 15:42:43.520072 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.616927 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.616973 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.616985 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.617004 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.617016 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.719006 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.719061 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.719079 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.719102 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.719119 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.758364 4966 generic.go:334] "Generic (PLEG): container finished" podID="108b786e-606b-4603-b136-6a3d61fe7ad5" containerID="7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e" exitCode=0 Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.758451 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" event={"ID":"108b786e-606b-4603-b136-6a3d61fe7ad5","Type":"ContainerDied","Data":"7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.762486 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tgscb" event={"ID":"3221fb17-1692-4499-b801-a980276d6162","Type":"ContainerStarted","Data":"30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.782276 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.797161 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.808008 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.824677 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.824850 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.824936 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.824959 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.824989 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.825016 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.840091 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.854699 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.866630 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.878116 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.908939 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.920105 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.927852 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.927881 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.927890 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.927932 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.927943 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:43Z","lastTransitionTime":"2026-01-27T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.934273 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.950302 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.970185 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:43 crc kubenswrapper[4966]: I0127 15:42:43.995127 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:43Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.013651 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb6
2196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.028091 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.030110 4966 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.030147 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.030160 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.030176 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.030187 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.040963 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.055370 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.066186 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.079426 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.088693 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.099061 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.109104 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.132439 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.132479 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.132489 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.132503 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.132512 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.134779 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091
257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.149467 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.162076 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.172544 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.191667 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.233487 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb6
2196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.234364 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.234404 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.234415 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.234432 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.234443 4966 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.252617 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.307225 4966 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.337547 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc 
kubenswrapper[4966]: I0127 15:42:44.337586 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.337595 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.337618 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.337627 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.439790 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.439823 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.439831 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.439843 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.439851 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.491532 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:20:28.711970217 +0000 UTC Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.519950 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.520051 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:44 crc kubenswrapper[4966]: E0127 15:42:44.520131 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:44 crc kubenswrapper[4966]: E0127 15:42:44.520250 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.542922 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.542990 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.543015 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.543049 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.543075 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.545361 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.559038 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.581932 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.602038 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.659388 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.659457 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.659480 4966 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.659504 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.659521 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.669546 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.691786 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.708035 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.726518 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.742202 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.758450 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.763099 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.763156 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.763171 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.763193 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.763210 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.769174 4966 generic.go:334] "Generic (PLEG): container finished" podID="108b786e-606b-4603-b136-6a3d61fe7ad5" containerID="480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce" exitCode=0 Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.769383 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" event={"ID":"108b786e-606b-4603-b136-6a3d61fe7ad5","Type":"ContainerDied","Data":"480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.785358 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.804756 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.821373 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.842463 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.865602 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.865642 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.865652 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.865668 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.865682 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.876188 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.897480 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.909283 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.922910 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.939341 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.964452 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.969780 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.969836 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.969851 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.969879 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.969911 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:44Z","lastTransitionTime":"2026-01-27T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.985290 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:44 crc kubenswrapper[4966]: I0127 15:42:44.996549 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.012795 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.045667 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.072542 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.072585 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.072596 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.072612 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.072624 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.083864 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.124374 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.163098 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.175238 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.175702 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.175722 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.175746 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.175763 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.205831 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.241861 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.278922 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.278972 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.278986 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.279008 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.279023 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.281866 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.382302 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.382352 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.382363 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.382381 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.382393 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.485429 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.485502 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.485514 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.485531 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.485542 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.491941 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:29:04.260267262 +0000 UTC Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.520436 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:45 crc kubenswrapper[4966]: E0127 15:42:45.520568 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.588682 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.588718 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.588731 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.588745 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.588753 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.691770 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.691836 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.691857 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.691887 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.691935 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.775240 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.776177 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.776252 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.782558 4966 generic.go:334] "Generic (PLEG): container finished" podID="108b786e-606b-4603-b136-6a3d61fe7ad5" containerID="215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a" exitCode=0 Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.782612 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" event={"ID":"108b786e-606b-4603-b136-6a3d61fe7ad5","Type":"ContainerDied","Data":"215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.795604 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.795641 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.795649 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.795663 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.795673 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.797363 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e
9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.815242 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.825753 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.836798 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.847958 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.853238 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.853344 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.861525 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.895226 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7
ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.897648 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.897677 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.897685 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.897697 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.897706 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.910411 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.921203 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.933565 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.943207 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.959970 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.969210 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.978457 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.989631 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.999395 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.999422 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.999431 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.999444 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:45 crc kubenswrapper[4966]: I0127 15:42:45.999452 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:45Z","lastTransitionTime":"2026-01-27T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.005846 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.016929 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.029121 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.040620 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.094946 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a0
0cf95e9215cd17bdacbf34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.101532 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.101624 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.101649 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.101682 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.101705 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:46Z","lastTransitionTime":"2026-01-27T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.132459 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b6
2f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.163430 4966 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.204280 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.204353 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.204377 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.204407 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.204428 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:46Z","lastTransitionTime":"2026-01-27T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.205833 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.244629 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.287624 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.306854 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.306928 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.306941 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.306957 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.306968 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:46Z","lastTransitionTime":"2026-01-27T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.333307 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.366376 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.403443 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.409590 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.409651 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.409676 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.409705 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.409728 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:46Z","lastTransitionTime":"2026-01-27T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.447948 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.490337 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.492385 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:30:47.68269711 +0000 UTC Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.512852 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.512974 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.512995 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.513020 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.513036 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:46Z","lastTransitionTime":"2026-01-27T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.520154 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.520203 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:46 crc kubenswrapper[4966]: E0127 15:42:46.520317 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:46 crc kubenswrapper[4966]: E0127 15:42:46.520395 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.616188 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.616252 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.616275 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.616297 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.616314 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:46Z","lastTransitionTime":"2026-01-27T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.719030 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.719098 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.719121 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.719149 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.719173 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:46Z","lastTransitionTime":"2026-01-27T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.791395 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" event={"ID":"108b786e-606b-4603-b136-6a3d61fe7ad5","Type":"ContainerStarted","Data":"aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.791470 4966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.813987 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.822048 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.822124 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.822150 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.822180 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:46 crc 
kubenswrapper[4966]: I0127 15:42:46.822203 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:46Z","lastTransitionTime":"2026-01-27T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.833655 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.853443 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.900602 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.922326 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.924987 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.925048 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:46 crc 
kubenswrapper[4966]: I0127 15:42:46.925082 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.925113 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.925138 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:46Z","lastTransitionTime":"2026-01-27T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.937151 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 
27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.955533 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:46 crc kubenswrapper[4966]: I0127 15:42:46.977403 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.002023 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.017678 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.027766 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.027809 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.027822 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.027839 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.027851 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.032649 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.043242 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.051957 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.062531 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.102641 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.130797 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.130861 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.130878 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.130958 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.130995 4966 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.233997 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.234058 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.234083 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.234153 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.234177 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.337642 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.337713 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.337739 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.337797 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.337826 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.440621 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.440697 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.440720 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.440749 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.440771 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.493531 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:27:38.971272727 +0000 UTC
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.520104 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:42:47 crc kubenswrapper[4966]: E0127 15:42:47.520264 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.543231 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.543284 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.543304 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.543332 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.543349 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.645891 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.645943 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.645955 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.645970 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.645981 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.748746 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.748786 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.748798 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.748813 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.748825 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.794203 4966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.851462 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.851504 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.851513 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.851527 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.851539 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.954399 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.954477 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.954504 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.954534 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:47 crc kubenswrapper[4966]: I0127 15:42:47.954555 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:47Z","lastTransitionTime":"2026-01-27T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.057036 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.057315 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.057326 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.057343 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.057357 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.160097 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.160168 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.160193 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.160225 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.160249 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.262735 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.262772 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.262783 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.262799 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.262813 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.364771 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.364832 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.364856 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.364888 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.364957 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.468381 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.468435 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.468453 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.468476 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.468493 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.494213 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:02:20.067236096 +0000 UTC Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.520017 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:48 crc kubenswrapper[4966]: E0127 15:42:48.520227 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.520329 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:48 crc kubenswrapper[4966]: E0127 15:42:48.520449 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.571163 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.571211 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.571224 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.571239 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.571248 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.674827 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.674884 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.674952 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.674975 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.674994 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.777810 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.777867 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.777883 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.777929 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.777946 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.800485 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/0.log" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.805728 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6" exitCode=1 Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.805795 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.807146 4966 scope.go:117] "RemoveContainer" containerID="76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.830402 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.853192 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.873798 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.880488 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.880586 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.880605 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.880628 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.880645 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.892889 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.910059 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.924778 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.942347 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.965476 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.983602 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.983669 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.983693 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.983725 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:48 crc kubenswrapper[4966]: I0127 15:42:48.983751 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:48Z","lastTransitionTime":"2026-01-27T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.000626 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.024734 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.041649 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.054779 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.074451 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a0
0cf95e9215cd17bdacbf34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:48Z\\\",\\\"message\\\":\\\"rvice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:42:48.208610 6271 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:42:48.208659 6271 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:42:48.208665 6271 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:42:48.208696 6271 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 15:42:48.208722 6271 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:42:48.208728 6271 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:42:48.208746 6271 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 15:42:48.208753 6271 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:42:48.208756 6271 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 15:42:48.208770 6271 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 15:42:48.208778 6271 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:42:48.208781 6271 factory.go:656] Stopping watch factory\\\\nI0127 15:42:48.208794 6271 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
15:42:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.085989 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.086023 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.086036 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.086052 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.086063 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:49Z","lastTransitionTime":"2026-01-27T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
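
Note on the NodeNotReady condition recurring above: the container runtime reports the network as unready until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which the Multus and OVN-Kubernetes pods only write once they are up. A minimal sketch in Go of that kind of existence check, assuming the conf-dir path from the log message (the real check lives in the container runtime, CRI-O, not in kubelet code like this):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // hasCNIConfig reports whether any CNI configuration file exists in dir.
    // It mirrors the readiness condition behind the "no CNI configuration
    // file" message above; an illustrative sketch, not kubelet's code path.
    func hasCNIConfig(dir string) (bool, error) {
    	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
    		matches, err := filepath.Glob(filepath.Join(dir, pattern))
    		if err != nil {
    			return false, err
    		}
    		if len(matches) > 0 {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d") // path taken from the log message
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("CNI config present:", ok)
    }

Once ovnkube-node writes its config the condition clears on the next sync, which is why the same NodeNotReady message repeats until the CNI pods recover.
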
Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.094972 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.104062 4966 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.188471 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.188523 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.188536 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.188555 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.188566 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:49Z","lastTransitionTime":"2026-01-27T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.326817 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.326852 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.326864 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.326880 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.326892 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:49Z","lastTransitionTime":"2026-01-27T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.428841 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.428886 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.428911 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.428929 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.428938 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:49Z","lastTransitionTime":"2026-01-27T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.494944 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:37:57.933929967 +0000 UTC Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.520453 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:49 crc kubenswrapper[4966]: E0127 15:42:49.520574 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.531187 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.531217 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.531227 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.531242 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.531253 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:49Z","lastTransitionTime":"2026-01-27T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.632653 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.632705 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.632714 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.632728 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.632736 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:49Z","lastTransitionTime":"2026-01-27T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
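
Every "Failed to update status for pod" entry in this log shares one root cause: the serving certificate of the pod.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27, so TLS verification fails before any admission request is processed. The validity test behind the "certificate has expired or is not yet valid" wording amounts to comparing the current time against the certificate's NotBefore/NotAfter window, sketched below (crypto/tls performs the real check during the handshake; the certificate file path here is hypothetical):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkValidity reports whether now falls inside the certificate's
    // validity window, mirroring the "expired or is not yet valid" error
    // in the log above. Illustrative sketch, not the crypto/tls verifier.
    func checkValidity(pemBytes []byte, now time.Time) error {
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
    		return fmt.Errorf("certificate has expired or is not yet valid: current time %s is after %s",
    			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    	}
    	return nil
    }

    func main() {
    	pemBytes, err := os.ReadFile("webhook-serving-cert.pem") // hypothetical path
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := checkValidity(pemBytes, time.Now()); err != nil {
    		fmt.Println(err)
    	}
    }
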
Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.735351 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.735402 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.735419 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.735443 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.735461 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:49Z","lastTransitionTime":"2026-01-27T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.809871 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/1.log" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.810386 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/0.log" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.812715 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e" exitCode=1 Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.812751 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.812806 4966 scope.go:117] "RemoveContainer" containerID="76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.813493 4966 scope.go:117] "RemoveContainer" containerID="5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e" Jan 27 15:42:49 crc kubenswrapper[4966]: E0127 15:42:49.813751 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.843407 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.843450 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.843466 4966 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.843483 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.843495 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:49Z","lastTransitionTime":"2026-01-27T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.851699 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.872602 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.890227 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e
1a1aae2d867c9d9268faea9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:48Z\\\",\\\"message\\\":\\\"rvice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:42:48.208610 6271 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:42:48.208659 6271 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:42:48.208665 6271 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:42:48.208696 6271 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 15:42:48.208722 6271 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:42:48.208728 6271 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:42:48.208746 6271 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 15:42:48.208753 6271 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:42:48.208756 6271 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 15:42:48.208770 6271 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 15:42:48.208778 6271 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:42:48.208781 6271 factory.go:656] Stopping watch factory\\\\nI0127 15:42:48.208794 6271 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:42:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:49Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wtl9v\\\\nI0127 15:42:49.737502 6436 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737509 6436 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737516 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wtl9v in node crc\\\\nF0127 15:42:49.737251 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.903561 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.913667 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.926484 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.937162 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.945880 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.945949 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.945969 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.945989 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.946004 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:49Z","lastTransitionTime":"2026-01-27T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.948547 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.959686 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.968329 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.979010 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.987449 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:49 crc kubenswrapper[4966]: I0127 15:42:49.996439 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.008870 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.026521 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.048201 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.048249 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.048261 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.048277 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.048289 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.151133 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.151188 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.151204 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.151228 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.151245 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.188517 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.188712 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.188761 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.188793 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:43:06.188754899 +0000 UTC m=+52.491548407 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.188937 4966 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.188958 4966 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.189017 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:43:06.189002237 +0000 UTC m=+52.491795815 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.189046 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:43:06.189034368 +0000 UTC m=+52.491827866 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.253592 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.253627 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.253638 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.253654 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.253666 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.289568 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.289619 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.289767 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.289770 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.289788 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.289798 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.289804 4966 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.289809 4966 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.289855 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:43:06.289840493 +0000 UTC m=+52.592633991 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.289872 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:43:06.289865074 +0000 UTC m=+52.592658572 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.354207 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.354249 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.354261 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.354278 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.354289 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.372865 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.377728 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.377786 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.377804 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.377829 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.377846 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.397323 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.400874 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.400939 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.400955 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.400974 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.400988 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.418173 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.421963 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.422015 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.422033 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.422055 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.422074 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.439594 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.443168 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.443223 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.443247 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.443277 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.443301 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.462990 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.463219 4966 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.464710 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
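
Taken together, the entries above show one status-update cycle failing to completion: every PATCH of the node status is rejected because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24, long before the node clock's 2026-01-27, so once the retry budget is spent the kubelet gives up with "update node status exceeds retry count". The standalone Go sketch below (not kubelet code; only the endpoint address is taken from the log) reproduces the x509 validity comparison by dialing the webhook and inspecting the leaf certificate:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Webhook endpoint from the failing Post in the log. InsecureSkipVerify is
        // deliberate: the point is to complete the handshake and inspect the
        // expired certificate, not to trust it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject=%q notBefore=%s notAfter=%s\n", leaf.Subject.String(), leaf.NotBefore, leaf.NotAfter)

        // The same comparison the x509 verifier reports in the kubelet error:
        // "certificate has expired or is not yet valid: current time ... is after ..."
        if now := time.Now(); now.After(leaf.NotAfter) {
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
        } else if now.Before(leaf.NotBefore) {
            fmt.Println("certificate is not yet valid")
        }
    }

Run against the endpoint above, it would print the same "current time 2026-01-27T15:42:50Z is after 2025-08-24T17:21:41Z" pair seen in each webhook failure.
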
event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.464809 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.464834 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.464868 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.464891 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.496133 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:35:08.346913362 +0000 UTC Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.520501 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.520519 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.520695 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:50 crc kubenswrapper[4966]: E0127 15:42:50.520808 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.567323 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.567375 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.567394 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.567418 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.567436 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.670518 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.670567 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.670576 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.670591 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.670601 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.773955 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.774018 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.774041 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.774070 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.774091 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.819013 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/1.log"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.876929 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.876984 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.877002 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.877026 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.877044 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
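
The "Finished parsing log file" entry above refers to a CRI container log under /var/log/pods, where every line has the form "<RFC3339Nano timestamp> <stdout|stderr> <P|F> <message>". A minimal parser for one such line is sketched below; the real kubelet reader additionally reassembles partial lines flagged P, which this version only records, and the sample line's message text is invented:

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // criLine is one record in the CRI container log format:
    // "<RFC3339Nano timestamp> <stdout|stderr> <P|F> <message>".
    type criLine struct {
        When    time.Time
        Stream  string
        Partial bool // "P" = partial line, "F" = full line
        Message string
    }

    func parseCRILine(line string) (criLine, error) {
        parts := strings.SplitN(line, " ", 4)
        if len(parts) != 4 {
            return criLine{}, fmt.Errorf("malformed CRI log line: %q", line)
        }
        ts, err := time.Parse(time.RFC3339Nano, parts[0])
        if err != nil {
            return criLine{}, fmt.Errorf("bad timestamp: %w", err)
        }
        return criLine{When: ts, Stream: parts[1], Partial: parts[2] == "P", Message: parts[3]}, nil
    }

    func main() {
        l, err := parseCRILine("2026-01-27T15:42:50.819013000Z stderr F some container output")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s [%s] partial=%v: %s\n", l.When.Format(time.RFC3339), l.Stream, l.Partial, l.Message)
    }
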
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.979620 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.979694 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.979849 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.979954 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:50 crc kubenswrapper[4966]: I0127 15:42:50.979989 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:50Z","lastTransitionTime":"2026-01-27T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.084358 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.084426 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.084449 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.084479 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.084502 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:51Z","lastTransitionTime":"2026-01-27T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.188722 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.188791 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.188814 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.188842 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.188867 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:51Z","lastTransitionTime":"2026-01-27T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
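
Each NodeNotReady condition in this stretch carries the same root cause: the runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. The standalone sketch below approximates that discovery step by scanning the conf directory for the extensions libcni loads; the directory path comes from the log, while the extension list is an assumption based on libcni's defaults:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // path from the log message
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Printf("NetworkReady=false: %v\n", err)
            return
        }
        var found []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // assumed libcni default extensions
                found = append(found, e.Name())
            }
        }
        if len(found) == 0 {
            fmt.Println("NetworkReady=false: no CNI configuration file found; has your network provider started?")
            return
        }
        fmt.Printf("NetworkReady=true: %v\n", found)
    }

On this node the directory is empty until the OVN-Kubernetes network provider writes its config, which is why the same condition repeats until the ovnkube-node pod becomes ready.
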
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.291955 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.292022 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.292047 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.292079 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.292101 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:51Z","lastTransitionTime":"2026-01-27T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.395503 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.395545 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.395556 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.395573 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.395583 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:51Z","lastTransitionTime":"2026-01-27T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.496341 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:07:26.537473172 +0000 UTC
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.498703 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.498760 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.498778 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.498804 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.498821 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:51Z","lastTransitionTime":"2026-01-27T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.520401 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:42:51 crc kubenswrapper[4966]: E0127 15:42:51.520636 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.601377 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.601477 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.601507 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.601542 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.601572 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:51Z","lastTransitionTime":"2026-01-27T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.677619 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.703267 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e68
9569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:48Z\\\",\\\"message\\\":\\\"rvice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:42:48.208610 6271 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:42:48.208659 6271 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:42:48.208665 6271 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:42:48.208696 6271 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 15:42:48.208722 6271 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:42:48.208728 6271 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:42:48.208746 6271 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 15:42:48.208753 6271 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:42:48.208756 6271 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 15:42:48.208770 6271 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 15:42:48.208778 6271 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:42:48.208781 6271 factory.go:656] Stopping watch factory\\\\nI0127 15:42:48.208794 6271 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
15:42:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:49Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wtl9v\\\\nI0127 15:42:49.737502 6436 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737509 6436 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737516 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wtl9v in node crc\\\\nF0127 15:42:49.737251 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has 
expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.704059 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.704102 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.704118 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.704136 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.704148 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:51Z","lastTransitionTime":"2026-01-27T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.724396 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.738294 4966 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.760250 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.779688 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.797656 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.806807 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.806844 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.806853 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.806868 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.806878 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:51Z","lastTransitionTime":"2026-01-27T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.816758 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.835047 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.854493 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.870472 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.888255 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.908661 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.908719 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.908736 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.908759 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.908781 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:51Z","lastTransitionTime":"2026-01-27T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.911362 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd
90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.931089 4966 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090cc
f8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.947653 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{
\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:51 crc kubenswrapper[4966]: I0127 15:42:51.979564 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.011483 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.011548 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.011568 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.011599 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.011634 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.038306 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg"] Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.038677 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.041867 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.042126 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.058740 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.079143 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.095781 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.110047 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 
15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.110185 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.110274 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.110479 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkrh\" (UniqueName: \"kubernetes.io/projected/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-kube-api-access-njkrh\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.114785 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.114839 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.114861 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.114924 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.114951 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.118721 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.144543 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:48Z\\\",\\\"message\\\":\\\"rvice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:42:48.208610 6271 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:42:48.208659 6271 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:42:48.208665 6271 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:42:48.208696 6271 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 15:42:48.208722 6271 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:42:48.208728 6271 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:42:48.208746 6271 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 15:42:48.208753 6271 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:42:48.208756 6271 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 15:42:48.208770 6271 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 15:42:48.208778 6271 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:42:48.208781 6271 factory.go:656] Stopping watch factory\\\\nI0127 15:42:48.208794 6271 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:42:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:49Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wtl9v\\\\nI0127 15:42:49.737502 6436 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737509 6436 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737516 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wtl9v in node crc\\\\nF0127 15:42:49.737251 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.167788 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.186471 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.202114 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.211985 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.212049 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.212084 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.212149 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkrh\" (UniqueName: \"kubernetes.io/projected/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-kube-api-access-njkrh\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.213367 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.213792 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.217584 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.218099 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.218122 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.218146 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.218162 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.230362 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.230746 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.240625 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkrh\" (UniqueName: \"kubernetes.io/projected/fc3cf140-b0df-4b4b-9366-3fa1cb9ac057-kube-api-access-njkrh\") pod \"ovnkube-control-plane-749d76644c-zwhlg\" (UID: \"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.251051 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.268074 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.291296 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.305471 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.321027 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.321078 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.321092 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.321114 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.321131 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.323745 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.339523 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.356759 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.358591 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b2
68b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: W0127 15:42:52.375189 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc3cf140_b0df_4b4b_9366_3fa1cb9ac057.slice/crio-bd193431047ae95d0c1eb231bde308f7e0065eade74249bfbf45639407f1a198 WatchSource:0}: Error finding container bd193431047ae95d0c1eb231bde308f7e0065eade74249bfbf45639407f1a198: Status 404 returned error can't find the container with id bd193431047ae95d0c1eb231bde308f7e0065eade74249bfbf45639407f1a198 Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.427758 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.427792 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.427801 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.427814 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.427841 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.496584 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:18:28.404898297 +0000 UTC Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.520104 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.520247 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:52 crc kubenswrapper[4966]: E0127 15:42:52.520250 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:52 crc kubenswrapper[4966]: E0127 15:42:52.520362 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.530391 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.530428 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.530438 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.530457 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.530469 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.632890 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.633037 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.633092 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.633148 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.633199 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.737984 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.738068 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.738090 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.738440 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.738716 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.831748 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" event={"ID":"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057","Type":"ContainerStarted","Data":"4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.831839 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" event={"ID":"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057","Type":"ContainerStarted","Data":"4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.831859 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" event={"ID":"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057","Type":"ContainerStarted","Data":"bd193431047ae95d0c1eb231bde308f7e0065eade74249bfbf45639407f1a198"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.841082 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.841133 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.841172 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.841211 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.841232 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.844327 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.857685 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.872387 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.883872 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.899797 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e
1a1aae2d867c9d9268faea9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:48Z\\\",\\\"message\\\":\\\"rvice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:42:48.208610 6271 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:42:48.208659 6271 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:42:48.208665 6271 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:42:48.208696 6271 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 15:42:48.208722 6271 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:42:48.208728 6271 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:42:48.208746 6271 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 15:42:48.208753 6271 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:42:48.208756 6271 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 15:42:48.208770 6271 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 15:42:48.208778 6271 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:42:48.208781 6271 factory.go:656] Stopping watch factory\\\\nI0127 15:42:48.208794 6271 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:42:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:49Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wtl9v\\\\nI0127 15:42:49.737502 6436 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737509 6436 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737516 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wtl9v in node crc\\\\nF0127 15:42:49.737251 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.911200 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.922144 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.932144 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.943426 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.943742 4966 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.943804 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.943826 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.943854 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.943876 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:52Z","lastTransitionTime":"2026-01-27T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.957520 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.969489 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.979363 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:52 crc kubenswrapper[4966]: I0127 15:42:52.989056 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.002244 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.011177 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.026635 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.046218 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.046286 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.046308 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.046336 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.046357 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.148209 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.148243 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.148251 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.148262 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.148271 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.178006 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2fsdv"] Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.178714 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:53 crc kubenswrapper[4966]: E0127 15:42:53.178841 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.190555 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.204118 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.218544 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc
0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.237094 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.252319 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.252481 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.252497 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.252580 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.252597 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.255764 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.269632 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.284079 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.298047 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.309187 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.321887 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.324442 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dm9\" (UniqueName: \"kubernetes.io/projected/311852f1-9764-49e5-a58a-5c2feee4ed1f-kube-api-access-92dm9\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.324536 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.343772 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.355066 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.355129 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.355146 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.355171 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.355188 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.357489 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.368570 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.378072 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.396179 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e
1a1aae2d867c9d9268faea9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:48Z\\\",\\\"message\\\":\\\"rvice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:42:48.208610 6271 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:42:48.208659 6271 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:42:48.208665 6271 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:42:48.208696 6271 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 15:42:48.208722 6271 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:42:48.208728 6271 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:42:48.208746 6271 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 15:42:48.208753 6271 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:42:48.208756 6271 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 15:42:48.208770 6271 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 15:42:48.208778 6271 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:42:48.208781 6271 factory.go:656] Stopping watch factory\\\\nI0127 15:42:48.208794 6271 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:42:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:49Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wtl9v\\\\nI0127 15:42:49.737502 6436 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737509 6436 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737516 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wtl9v in node crc\\\\nF0127 15:42:49.737251 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.412018 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.421870 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.425538 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.425691 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92dm9\" (UniqueName: \"kubernetes.io/projected/311852f1-9764-49e5-a58a-5c2feee4ed1f-kube-api-access-92dm9\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:53 crc kubenswrapper[4966]: E0127 15:42:53.425931 4966 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 
15:42:53 crc kubenswrapper[4966]: E0127 15:42:53.426080 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs podName:311852f1-9764-49e5-a58a-5c2feee4ed1f nodeName:}" failed. No retries permitted until 2026-01-27 15:42:53.926058262 +0000 UTC m=+40.228851820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs") pod "network-metrics-daemon-2fsdv" (UID: "311852f1-9764-49e5-a58a-5c2feee4ed1f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.444743 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92dm9\" (UniqueName: \"kubernetes.io/projected/311852f1-9764-49e5-a58a-5c2feee4ed1f-kube-api-access-92dm9\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.458391 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.458445 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.458463 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.458488 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.458507 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.497534 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:20:42.131116813 +0000 UTC Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.519808 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:53 crc kubenswrapper[4966]: E0127 15:42:53.519969 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.560361 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.560415 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.560436 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.560459 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.560475 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.663031 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.663369 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.663383 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.663401 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.663414 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.765813 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.765869 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.765886 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.765942 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.765960 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.868771 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.868832 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.868853 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.868880 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.868929 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.931868 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:53 crc kubenswrapper[4966]: E0127 15:42:53.932322 4966 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:42:53 crc kubenswrapper[4966]: E0127 15:42:53.932424 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs podName:311852f1-9764-49e5-a58a-5c2feee4ed1f nodeName:}" failed. No retries permitted until 2026-01-27 15:42:54.932397908 +0000 UTC m=+41.235191426 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs") pod "network-metrics-daemon-2fsdv" (UID: "311852f1-9764-49e5-a58a-5c2feee4ed1f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.971884 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.971971 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.971989 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.972015 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:53 crc kubenswrapper[4966]: I0127 15:42:53.972032 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:53Z","lastTransitionTime":"2026-01-27T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.075431 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.075500 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.075530 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.075562 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.075585 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:54Z","lastTransitionTime":"2026-01-27T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.178220 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.178551 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.178669 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.178745 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.178809 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:54Z","lastTransitionTime":"2026-01-27T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.282035 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.282105 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.282130 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.282158 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.282178 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:54Z","lastTransitionTime":"2026-01-27T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.386034 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.386392 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.386591 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.386741 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.386964 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:54Z","lastTransitionTime":"2026-01-27T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.490289 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.490331 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.490347 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.490370 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.490386 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:54Z","lastTransitionTime":"2026-01-27T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.498644 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:12:06.081123142 +0000 UTC Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.520195 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:54 crc kubenswrapper[4966]: E0127 15:42:54.520566 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.521364 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:54 crc kubenswrapper[4966]: E0127 15:42:54.521622 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.523150 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:54 crc kubenswrapper[4966]: E0127 15:42:54.523395 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.538051 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.558560 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.579474 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.593356 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.593433 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.593450 4966 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.593478 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.593497 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:54Z","lastTransitionTime":"2026-01-27T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.605994 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.640025 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e
1a1aae2d867c9d9268faea9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76368f303c39c46fb23e7ec0c1f9ed64c1ddf5a00cf95e9215cd17bdacbf34a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:48Z\\\",\\\"message\\\":\\\"rvice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:42:48.208610 6271 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:42:48.208659 6271 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:42:48.208665 6271 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:42:48.208696 6271 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 15:42:48.208722 6271 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:42:48.208728 6271 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 15:42:48.208734 6271 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:42:48.208746 6271 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 15:42:48.208753 6271 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:42:48.208756 6271 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 15:42:48.208770 6271 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 15:42:48.208778 6271 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:42:48.208781 6271 factory.go:656] Stopping watch factory\\\\nI0127 15:42:48.208794 6271 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:42:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:49Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wtl9v\\\\nI0127 15:42:49.737502 6436 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737509 6436 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737516 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wtl9v in node crc\\\\nF0127 15:42:49.737251 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.664664 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.684833 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.697678 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.697727 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.697736 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.697752 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.697763 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:54Z","lastTransitionTime":"2026-01-27T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.700223 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.713515 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.730032 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc
0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.750348 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.766281 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.784561 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.800341 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.800383 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.800395 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.800412 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.800424 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:54Z","lastTransitionTime":"2026-01-27T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.805143 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.818150 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de5339
6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.832982 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.879146 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:42:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.904109 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.904176 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.904193 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.904215 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.904233 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:54Z","lastTransitionTime":"2026-01-27T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:54 crc kubenswrapper[4966]: I0127 15:42:54.957107 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:54 crc kubenswrapper[4966]: E0127 15:42:54.957256 4966 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:42:54 crc kubenswrapper[4966]: E0127 15:42:54.957321 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs podName:311852f1-9764-49e5-a58a-5c2feee4ed1f nodeName:}" failed. No retries permitted until 2026-01-27 15:42:56.957304695 +0000 UTC m=+43.260098203 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs") pod "network-metrics-daemon-2fsdv" (UID: "311852f1-9764-49e5-a58a-5c2feee4ed1f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.006948 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.006996 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.007008 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.007028 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.007040 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.109932 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.109972 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.109982 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.109996 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.110005 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.214135 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.214223 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.214249 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.214283 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.214306 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.317460 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.317529 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.317550 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.317574 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.317595 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.420731 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.420787 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.420806 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.420830 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.420847 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.499312 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:56:35.647335483 +0000 UTC Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.520788 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:55 crc kubenswrapper[4966]: E0127 15:42:55.521035 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.523457 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.523521 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.523544 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.523578 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.523601 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.626700 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.626762 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.626781 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.626804 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.626831 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.729789 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.729848 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.729867 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.729893 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.729941 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.833095 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.833156 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.833177 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.833201 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.833219 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.936393 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.936488 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.936512 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.936543 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:55 crc kubenswrapper[4966]: I0127 15:42:55.936574 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:55Z","lastTransitionTime":"2026-01-27T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.039441 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.039530 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.039554 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.039586 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.039611 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.142339 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.142415 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.142440 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.142469 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.142489 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.245543 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.245615 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.245640 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.245666 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.245684 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.347976 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.348024 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.348032 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.348046 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.348055 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.451603 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.451657 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.451677 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.451702 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.451721 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.500513 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:34:30.795094308 +0000 UTC Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.519849 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.519852 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:56 crc kubenswrapper[4966]: E0127 15:42:56.520095 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.520162 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:56 crc kubenswrapper[4966]: E0127 15:42:56.520310 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:42:56 crc kubenswrapper[4966]: E0127 15:42:56.520473 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.555246 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.555306 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.555326 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.555353 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.555376 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.657821 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.657885 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.657952 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.657987 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.658011 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.761248 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.761304 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.761321 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.761343 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.761361 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.863595 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.863678 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.863697 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.863751 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.863768 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.969376 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.969466 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.969489 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.969520 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.969542 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:56Z","lastTransitionTime":"2026-01-27T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:56 crc kubenswrapper[4966]: I0127 15:42:56.980146 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:56 crc kubenswrapper[4966]: E0127 15:42:56.980350 4966 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:42:56 crc kubenswrapper[4966]: E0127 15:42:56.980478 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs podName:311852f1-9764-49e5-a58a-5c2feee4ed1f nodeName:}" failed. No retries permitted until 2026-01-27 15:43:00.98044101 +0000 UTC m=+47.283234588 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs") pod "network-metrics-daemon-2fsdv" (UID: "311852f1-9764-49e5-a58a-5c2feee4ed1f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.072583 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.072640 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.072650 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.072664 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.072674 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.175767 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.175847 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.175870 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.175942 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.175966 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.279610 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.279676 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.279693 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.279716 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.279733 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.383237 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.383310 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.383327 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.383355 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.383372 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.485621 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.485681 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.485705 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.485737 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.485759 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.501472 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:26:57.72240775 +0000 UTC Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.519865 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:57 crc kubenswrapper[4966]: E0127 15:42:57.520092 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.589312 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.589388 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.589413 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.589443 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.589465 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.691663 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.691711 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.691723 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.691743 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.691756 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.794519 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.794561 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.794569 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.794583 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.794594 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.897364 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.897406 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.897414 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.897428 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.897437 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.999491 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.999543 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.999556 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:57 crc kubenswrapper[4966]: I0127 15:42:57.999581 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:57.999596 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:57Z","lastTransitionTime":"2026-01-27T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.102952 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.103062 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.103088 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.103119 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.103146 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:58Z","lastTransitionTime":"2026-01-27T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.206051 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.206110 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.206129 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.206154 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.206172 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:58Z","lastTransitionTime":"2026-01-27T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.313629 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.313706 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.313741 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.313772 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.313794 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:58Z","lastTransitionTime":"2026-01-27T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.416626 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.416681 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.416700 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.416755 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.416780 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:58Z","lastTransitionTime":"2026-01-27T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.502103 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:50:05.245548855 +0000 UTC Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.520370 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.520422 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.520435 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.520454 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.520468 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:58Z","lastTransitionTime":"2026-01-27T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.520694 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.520757 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.520719 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:42:58 crc kubenswrapper[4966]: E0127 15:42:58.520870 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:42:58 crc kubenswrapper[4966]: E0127 15:42:58.520989 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:42:58 crc kubenswrapper[4966]: E0127 15:42:58.521066 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.622982 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.623049 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.623067 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.623091 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.623108 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:58Z","lastTransitionTime":"2026-01-27T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.725444 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.725646 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.725671 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.725694 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.725710 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:58Z","lastTransitionTime":"2026-01-27T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.828157 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.828208 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.828226 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.828249 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.828263 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:58Z","lastTransitionTime":"2026-01-27T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.931344 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.931383 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.931394 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.931412 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:58 crc kubenswrapper[4966]: I0127 15:42:58.931424 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:58Z","lastTransitionTime":"2026-01-27T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.034417 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.034475 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.034492 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.034517 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.034535 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.137724 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.137867 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.137940 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.138014 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.138085 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.240152 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.240195 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.240205 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.240237 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.240250 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.343507 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.343577 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.343597 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.343620 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.343640 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.447193 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.447254 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.447273 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.447296 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.447312 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.502284 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:47:26.680051499 +0000 UTC Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.520313 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:42:59 crc kubenswrapper[4966]: E0127 15:42:59.520576 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.552429 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.552794 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.553977 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.554077 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.554104 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.657218 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.657282 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.657300 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.657322 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.657340 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.759483 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.759530 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.759538 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.759553 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.759563 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.862313 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.862365 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.862382 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.862410 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.862428 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.965072 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.965107 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.965117 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.965132 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:42:59 crc kubenswrapper[4966]: I0127 15:42:59.965143 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:42:59Z","lastTransitionTime":"2026-01-27T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.068511 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.068579 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.068598 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.068624 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.068642 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.170671 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.170731 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.170748 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.170771 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.170788 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.273127 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.273157 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.273166 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.273180 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.273189 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.376827 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.377119 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.377154 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.377182 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.377203 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.479858 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.480324 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.480516 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.480850 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.481062 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.502460 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:27:06.219941554 +0000 UTC Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.520029 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.520107 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.520236 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:00 crc kubenswrapper[4966]: E0127 15:43:00.520279 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:00 crc kubenswrapper[4966]: E0127 15:43:00.520435 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:00 crc kubenswrapper[4966]: E0127 15:43:00.520531 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.584294 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.584576 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.584699 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.585054 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.585195 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.608248 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.608617 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.608835 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.609104 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.609325 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: E0127 15:43:00.630371 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:00Z is after 
2025-08-24T17:21:41Z" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.636533 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.636603 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.636623 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.636649 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.636705 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: E0127 15:43:00.657555 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
2025-08-24T17:21:41Z" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.662887 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.662987 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.663006 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.663033 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.663051 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: E0127 15:43:00.683635 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
2025-08-24T17:21:41Z" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.686777 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.686887 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.687000 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.687071 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.687131 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: E0127 15:43:00.699160 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:00Z is after 
2025-08-24T17:21:41Z" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.702272 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.702314 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.702326 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.702347 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.702362 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: E0127 15:43:00.714008 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:00Z is after 
2025-08-24T17:21:41Z" Jan 27 15:43:00 crc kubenswrapper[4966]: E0127 15:43:00.714120 4966 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.715445 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.715492 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.715509 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.715526 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.715540 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.818186 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.818227 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.818239 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.818263 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.818289 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.921878 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.921974 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.921998 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.922023 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:00 crc kubenswrapper[4966]: I0127 15:43:00.922040 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:00Z","lastTransitionTime":"2026-01-27T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.021176 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:01 crc kubenswrapper[4966]: E0127 15:43:01.021339 4966 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:43:01 crc kubenswrapper[4966]: E0127 15:43:01.021441 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs podName:311852f1-9764-49e5-a58a-5c2feee4ed1f nodeName:}" failed. No retries permitted until 2026-01-27 15:43:09.021416033 +0000 UTC m=+55.324209561 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs") pod "network-metrics-daemon-2fsdv" (UID: "311852f1-9764-49e5-a58a-5c2feee4ed1f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.025423 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.025504 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.025526 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.025553 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.025578 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.128466 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.128528 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.128545 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.128572 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.128590 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.231809 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.231857 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.231874 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.231904 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.231946 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.334396 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.334455 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.334523 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.334557 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.334581 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.437136 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.437269 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.437286 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.437309 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.437328 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.502986 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:30:49.30086237 +0000 UTC Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.521414 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:01 crc kubenswrapper[4966]: E0127 15:43:01.521594 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.522136 4966 scope.go:117] "RemoveContainer" containerID="5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.539057 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.539097 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.539108 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.539124 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.539136 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.541709 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.566997 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:49Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wtl9v\\\\nI0127 15:42:49.737502 6436 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737509 6436 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737516 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wtl9v in node crc\\\\nF0127 15:42:49.737251 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.584954 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.597505 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.610425 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.622659 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.637880 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.640977 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.641007 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.641016 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.641030 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.641040 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.649378 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.660373 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.673023 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.688522 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.699123 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.712635 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.723890 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.740637 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.743159 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.743195 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.743208 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.743225 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.743237 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.752653 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.778708 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.845613 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.845672 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.845697 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.845754 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.845773 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.868635 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/1.log" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.872143 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.872328 4966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.899417 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.914933 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.933336 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.947223 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.947264 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.947274 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.947291 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.947301 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:01Z","lastTransitionTime":"2026-01-27T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.954996 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:49Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wtl9v\\\\nI0127 15:42:49.737502 6436 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737509 6436 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737516 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wtl9v in node crc\\\\nF0127 15:42:49.737251 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has 
expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.970136 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.982131 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:01 crc kubenswrapper[4966]: I0127 15:43:01.995132 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.009450 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.023645 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.043732 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.049630 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.049677 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.049692 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.049714 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.049727 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.063485 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.072362 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.080813 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.094028 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.104624 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ov
nkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.157997 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.158040 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.158057 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.158077 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.158089 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.165768 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.190848 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.260291 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.260337 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.260349 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.260368 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.260381 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.363685 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.363744 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.363754 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.363776 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.363789 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.467337 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.467411 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.467424 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.467447 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.467460 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.503954 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 06:37:25.952566884 +0000 UTC
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.520449 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.520508 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.520562 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv"
Jan 27 15:43:02 crc kubenswrapper[4966]: E0127 15:43:02.520687 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:43:02 crc kubenswrapper[4966]: E0127 15:43:02.520788 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:43:02 crc kubenswrapper[4966]: E0127 15:43:02.520929 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.570137 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.570196 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.570211 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.570229 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.570246 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.672805 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.672862 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.672875 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.672914 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.672926 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.775789 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.775824 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.775834 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.775852 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.775863 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.878463 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.878546 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.878575 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.878607 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.878634 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.879323 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/2.log"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.880296 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/1.log"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.885767 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c" exitCode=1
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.885853 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c"}
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.885965 4966 scope.go:117] "RemoveContainer" containerID="5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.887259 4966 scope.go:117] "RemoveContainer" containerID="03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c"
Jan 27 15:43:02 crc kubenswrapper[4966]: E0127 15:43:02.887586 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1"
Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.908828 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.928466 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.943446 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.977483 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297
a77a20cc7d91aa5f7158384c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4c394f31a2c73cc3da7389b3ab78e21848a77e1a1aae2d867c9d9268faea9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:42:49Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wtl9v\\\\nI0127 15:42:49.737502 6436 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737509 6436 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 15:42:49.737516 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wtl9v in node crc\\\\nF0127 15:42:49.737251 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.982405 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.982512 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.982536 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.982563 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.982582 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:02Z","lastTransitionTime":"2026-01-27T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:02 crc kubenswrapper[4966]: I0127 15:43:02.996266 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.009495 4966 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.023295 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.040389 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.058056 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc
0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.077706 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.085646 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.085766 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.085788 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.085842 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.085860 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:03Z","lastTransitionTime":"2026-01-27T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.099879 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.117193 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.134849 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.150843 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.168815 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.189512 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.189569 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.189587 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.189615 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.189636 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:03Z","lastTransitionTime":"2026-01-27T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.189837 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.223671 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.293497 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.293579 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.293596 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.293624 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.293642 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:03Z","lastTransitionTime":"2026-01-27T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.395824 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.395884 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.395942 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.395971 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.395987 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:03Z","lastTransitionTime":"2026-01-27T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.498869 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.498933 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.498943 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.498958 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.498969 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:03Z","lastTransitionTime":"2026-01-27T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.505101 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:30:18.724285006 +0000 UTC Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.520498 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:03 crc kubenswrapper[4966]: E0127 15:43:03.520679 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.602014 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.602087 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.602107 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.602131 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.602148 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:03Z","lastTransitionTime":"2026-01-27T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.705017 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.705059 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.705087 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.705102 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.705113 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:03Z","lastTransitionTime":"2026-01-27T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.807357 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.807642 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.807652 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.807665 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.807675 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:03Z","lastTransitionTime":"2026-01-27T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.893227 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/2.log" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.911331 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.911392 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.911415 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.911444 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:03 crc kubenswrapper[4966]: I0127 15:43:03.911467 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:03Z","lastTransitionTime":"2026-01-27T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.014582 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.014667 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.014688 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.014715 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.014733 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.117879 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.117996 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.118071 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.118153 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.118184 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.221160 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.221241 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.221269 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.221308 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.221327 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.252814 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.254586 4966 scope.go:117] "RemoveContainer" containerID="03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c" Jan 27 15:43:04 crc kubenswrapper[4966]: E0127 15:43:04.254850 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.276942 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.297086 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.317362 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.323702 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.323758 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.323775 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.323801 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.323821 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.351513 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.377796 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.395108 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.413059 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.427205 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.427255 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.427270 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.427291 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.427309 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.435699 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.451767 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.470962 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.488368 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.499995 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.505310 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:29:42.401026003 +0000 UTC Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.511101 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.520769 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:04 crc kubenswrapper[4966]: E0127 15:43:04.520955 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.521225 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:04 crc kubenswrapper[4966]: E0127 15:43:04.521338 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.522938 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:04 crc kubenswrapper[4966]: E0127 15:43:04.523045 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.528304 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-
io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.529632 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.529668 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.529682 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.529751 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.529770 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.541039 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.552128 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc 
kubenswrapper[4966]: I0127 15:43:04.578822 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.596511 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.610619 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.623224 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.631842 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.631882 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.631897 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.631927 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.631939 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.641740 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.666657 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297
a77a20cc7d91aa5f7158384c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.687390 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.698650 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.710108 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.723029 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.733893 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.734001 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.734018 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.734039 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.734054 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.734939 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.750837 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.762418 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.781697 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.794109 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.810533 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.822700 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.833121 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.836768 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.836810 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.836828 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.836850 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.836866 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.940106 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.940202 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.940221 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.940245 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:04 crc kubenswrapper[4966]: I0127 15:43:04.940262 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:04Z","lastTransitionTime":"2026-01-27T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.042518 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.042577 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.042587 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.042600 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.042609 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.145003 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.145242 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.145311 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.145384 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.145450 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.248256 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.248290 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.248299 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.248312 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.248320 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.351346 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.351391 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.351415 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.351443 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.351462 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.453718 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.453771 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.453790 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.453813 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.453830 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.505952 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:50:11.57543082 +0000 UTC Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.520408 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:05 crc kubenswrapper[4966]: E0127 15:43:05.520587 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.557114 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.557166 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.557189 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.557217 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.557241 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.660126 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.660160 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.660171 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.660186 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.660200 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.763419 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.763762 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.763887 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.764047 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.764185 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.855612 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.868766 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.870085 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.870186 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.870214 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.870246 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.870269 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.889462 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.912383 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.933488 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.955683 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.973711 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.973749 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.973763 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.973783 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.973798 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:05Z","lastTransitionTime":"2026-01-27T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:05 crc kubenswrapper[4966]: I0127 15:43:05.987564 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.004980 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.021754 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.038682 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.057966 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.076596 4966 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.076703 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.076751 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.076805 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.076825 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:06Z","lastTransitionTime":"2026-01-27T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.081047 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.096607 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.115718 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.129839 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.145146 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.159219 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.171733 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.179559 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.179631 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.179654 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.179687 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.179705 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:06Z","lastTransitionTime":"2026-01-27T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.184521 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.219159 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.219308 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.219356 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:43:38.219316257 +0000 UTC m=+84.522109795 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.219432 4966 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.219526 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.219640 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:43:38.219536534 +0000 UTC m=+84.522330052 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.219744 4966 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.220161 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:43:38.220112923 +0000 UTC m=+84.522906451 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.282376 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.282407 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.282415 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.282428 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.282437 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:06Z","lastTransitionTime":"2026-01-27T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.320405 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.320478 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.320704 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.320737 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.320736 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.320786 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.320807 4966 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.320756 4966 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.320883 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:43:38.320858495 +0000 UTC m=+84.623652013 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.321068 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:43:38.32099943 +0000 UTC m=+84.623792958 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.386471 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.386559 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.386583 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.386617 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.386641 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:06Z","lastTransitionTime":"2026-01-27T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.489816 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.490000 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.490023 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.490048 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.490067 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:06Z","lastTransitionTime":"2026-01-27T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.506872 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:43:40.19250454 +0000 UTC Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.520288 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.520390 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.520489 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.520417 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.520614 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:06 crc kubenswrapper[4966]: E0127 15:43:06.520696 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.592984 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.593054 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.593073 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.593098 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.593116 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:06Z","lastTransitionTime":"2026-01-27T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.696731 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.696850 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.696870 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.696925 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.696944 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:06Z","lastTransitionTime":"2026-01-27T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.800090 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.800144 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.800155 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.800174 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.800190 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:06Z","lastTransitionTime":"2026-01-27T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.903402 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.903465 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.903484 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.903509 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:06 crc kubenswrapper[4966]: I0127 15:43:06.903528 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:06Z","lastTransitionTime":"2026-01-27T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.006716 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.006792 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.006809 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.006839 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.006860 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.110577 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.110637 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.110659 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.110711 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.110737 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.213366 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.213435 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.213453 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.213478 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.213498 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.316847 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.316889 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.316950 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.316975 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.316991 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.420296 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.420355 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.420372 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.420395 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.420412 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.508136 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:35:05.33456162 +0000 UTC Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.521950 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:07 crc kubenswrapper[4966]: E0127 15:43:07.522179 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.524332 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.524378 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.524401 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.524431 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.524455 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.627279 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.627333 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.627351 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.627376 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.627393 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.731082 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.731251 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.731331 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.731410 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.731437 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.834118 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.834195 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.834220 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.834248 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.834265 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.937571 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.937626 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.937642 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.937665 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:07 crc kubenswrapper[4966]: I0127 15:43:07.937684 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:07Z","lastTransitionTime":"2026-01-27T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.041520 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.041598 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.041617 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.041664 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.041688 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.145040 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.145151 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.145180 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.145208 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.145424 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.247414 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.247503 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.247517 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.247533 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.247548 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.350191 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.350258 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.350285 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.350314 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.350336 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.453385 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.453457 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.453481 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.453510 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.453531 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.508993 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:40:49.05799987 +0000 UTC Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.520603 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.520658 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.520674 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:08 crc kubenswrapper[4966]: E0127 15:43:08.520736 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:08 crc kubenswrapper[4966]: E0127 15:43:08.520871 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:08 crc kubenswrapper[4966]: E0127 15:43:08.521077 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.555880 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.556001 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.556023 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.556053 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.556075 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.658204 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.658247 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.658259 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.658275 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.658287 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.761166 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.761202 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.761213 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.761228 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.761239 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.863162 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.863186 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.863193 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.863205 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.863213 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.967085 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.967142 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.967157 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.967179 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:08 crc kubenswrapper[4966]: I0127 15:43:08.967195 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:08Z","lastTransitionTime":"2026-01-27T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.051349 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:09 crc kubenswrapper[4966]: E0127 15:43:09.051540 4966 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:43:09 crc kubenswrapper[4966]: E0127 15:43:09.051631 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs podName:311852f1-9764-49e5-a58a-5c2feee4ed1f nodeName:}" failed. No retries permitted until 2026-01-27 15:43:25.051602846 +0000 UTC m=+71.354396364 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs") pod "network-metrics-daemon-2fsdv" (UID: "311852f1-9764-49e5-a58a-5c2feee4ed1f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.069834 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.069893 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.069956 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.069979 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.069995 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.173734 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.173811 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.173846 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.173883 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.173938 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
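The nestedpendingoperations entry above defers the next MountVolume attempt by 16s. The kubelet roughly doubles this delay after each failed volume operation up to a cap; the 500ms base and 2m2s cap in this sketch are assumed upstream defaults, under which 16s would be the sixth consecutive failure (0.5s * 2^5):

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles the delay per consecutive failure,
// capped, mirroring the "(durationBeforeRetry 16s)" value in the log.
func durationBeforeRetry(failures int) time.Duration {
	const base = 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second
	d := base
	for i := 1; i < failures; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 1; f <= 9; f++ {
		fmt.Printf("failure %d -> retry in %v\n", f, durationBeforeRetry(f))
	}
}
```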
Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.276882 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.276940 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.276953 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.276972 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.276983 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.380147 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.380205 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.380223 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.380248 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.380265 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.482339 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.482388 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.482401 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.482422 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.482436 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.509892 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:49:42.867376459 +0000 UTC Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.520261 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:09 crc kubenswrapper[4966]: E0127 15:43:09.520417 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.585170 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.585224 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.585236 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.585253 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.585265 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.687515 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.687557 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.687567 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.687583 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.687593 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.789804 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.789852 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.789862 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.789876 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.789886 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.893553 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.893605 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.893621 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.893664 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.893680 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.997289 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.997363 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.997381 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.997408 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:09 crc kubenswrapper[4966]: I0127 15:43:09.997427 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:09Z","lastTransitionTime":"2026-01-27T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.100676 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.100757 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.100793 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.100824 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.100845 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:10Z","lastTransitionTime":"2026-01-27T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.204524 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.204585 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.204603 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.204628 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.204646 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:10Z","lastTransitionTime":"2026-01-27T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.306714 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.306765 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.306780 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.306802 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.306815 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:10Z","lastTransitionTime":"2026-01-27T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.409892 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.410044 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.410069 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.410104 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.410126 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:10Z","lastTransitionTime":"2026-01-27T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.510940 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:01:48.620253222 +0000 UTC Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.513829 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.513892 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.513951 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.513979 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.514009 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:10Z","lastTransitionTime":"2026-01-27T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.520722 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.520744 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:10 crc kubenswrapper[4966]: E0127 15:43:10.520960 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.521001 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:10 crc kubenswrapper[4966]: E0127 15:43:10.521135 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:10 crc kubenswrapper[4966]: E0127 15:43:10.521283 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.616973 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.617042 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.617062 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.617086 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.617104 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:10Z","lastTransitionTime":"2026-01-27T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.719821 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.719873 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.719889 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.719922 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.719936 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:10Z","lastTransitionTime":"2026-01-27T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.823075 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.823115 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.823129 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.823145 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.823156 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:10Z","lastTransitionTime":"2026-01-27T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.925796 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.925856 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.925874 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.925929 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:10 crc kubenswrapper[4966]: I0127 15:43:10.925950 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:10Z","lastTransitionTime":"2026-01-27T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.006769 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.006824 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.006840 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.006858 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.006871 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:11Z","lastTransitionTime":"2026-01-27T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:11 crc kubenswrapper[4966]: E0127 15:43:11.020644 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:11Z is after 
2025-08-24T17:21:41Z" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.031206 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.031251 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.031264 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.031281 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.031295 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:11Z","lastTransitionTime":"2026-01-27T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:11 crc kubenswrapper[4966]: E0127 15:43:11.045450 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:11Z is after 
2025-08-24T17:21:41Z" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.050618 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.050669 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.050686 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.050711 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.050730 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:11Z","lastTransitionTime":"2026-01-27T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:11 crc kubenswrapper[4966]: E0127 15:43:11.064947 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:11Z is after 
2025-08-24T17:21:41Z" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.069416 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.069449 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.069458 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.069475 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.069488 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:11Z","lastTransitionTime":"2026-01-27T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:11 crc kubenswrapper[4966]: E0127 15:43:11.086852 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:11Z is after 
2025-08-24T17:21:41Z" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.091816 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.091867 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.091878 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.091911 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.091933 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:11Z","lastTransitionTime":"2026-01-27T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:11 crc kubenswrapper[4966]: E0127 15:43:11.109679 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:11Z is after 
2025-08-24T17:21:41Z"
Jan 27 15:43:11 crc kubenswrapper[4966]: E0127 15:43:11.109962 4966 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.111933 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.111997 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.112021 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.112051 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.112069 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:11Z","lastTransitionTime":"2026-01-27T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.511797 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:49:08.398277404 +0000 UTC
Jan 27 15:43:11 crc kubenswrapper[4966]: I0127 15:43:11.520098 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:43:11 crc kubenswrapper[4966]: E0127 15:43:11.520219 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
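Both failed patches in this window share one root cause: the network-node-identity webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node's clock (2026-01-27). The Go sketch below reproduces the same validity-window check from the client side; it is a hypothetical diagnostic, not kubelet code, and only the address and the error wording come from the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Webhook endpoint taken from the log lines above.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            // Skip chain verification so we can still read the peer
            // certificate that normal verification would reject.
            InsecureSkipVerify: true,
        })
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        now := time.Now()
        switch {
        case now.After(cert.NotAfter):
            // Same shape as the kubelet error: current time ... is after NotAfter.
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Println("certificate is not yet valid")
        default:
            fmt.Println("certificate is within its validity window")
        }
    }

With InsecureSkipVerify the handshake proceeds far enough to expose the peer certificate, which is what makes the NotAfter comparison possible even when normal verification would abort the connection, as it does for the kubelet's webhook client here.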
Jan 27 15:43:12 crc kubenswrapper[4966]: I0127 15:43:12.512286 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:50:16.778444041 +0000 UTC
Jan 27 15:43:12 crc kubenswrapper[4966]: I0127 15:43:12.520678 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:43:12 crc kubenswrapper[4966]: I0127 15:43:12.520822 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:43:12 crc kubenswrapper[4966]: I0127 15:43:12.521113 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv"
Jan 27 15:43:12 crc kubenswrapper[4966]: E0127 15:43:12.521114 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:43:12 crc kubenswrapper[4966]: E0127 15:43:12.521285 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:43:12 crc kubenswrapper[4966]: E0127 15:43:12.521421 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f"
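The pod sync failures above all reduce to the same readiness test: the runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A minimal sketch of such a presence check, assuming the conventional .conf/.conflist/.json extensions (the directory is the only detail taken from the log; the helper name is invented):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether any plausible CNI config file exists in dir.
    func hasCNIConfig(dir string) bool {
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            if matches, err := filepath.Glob(filepath.Join(dir, pattern)); err == nil && len(matches) > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path quoted in the kubelet message
        if !hasCNIConfig(dir) {
            fmt.Printf("no CNI configuration file in %s/. Has your network provider started?\n", dir)
            os.Exit(1)
        }
        fmt.Println("CNI configuration present")
    }

Once the network provider writes a config into that directory, NetworkReady flips to true and the pending sandboxes for these pods can be created.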
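The condition={...} payload in the recurring setters.go records is a serialized node Ready condition. For reference, here is a self-contained sketch that produces the same JSON shape; the struct is a local stand-in for v1.NodeCondition from k8s.io/api/core/v1, and the message is shortened.

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // NodeCondition mirrors the fields visible in the log payload; the real
    // definition lives in k8s.io/api/core/v1.
    type NodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        now := time.Now().UTC().Truncate(time.Second)
        cond := NodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false",
        }
        out, _ := json.Marshal(cond)
        fmt.Println(string(out)) // matches the condition={...} shape in the records above
    }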
Jan 27 15:43:13 crc kubenswrapper[4966]: I0127 15:43:13.513298 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:38:52.194622145 +0000 UTC
Jan 27 15:43:13 crc kubenswrapper[4966]: I0127 15:43:13.520762 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:43:13 crc kubenswrapper[4966]: E0127 15:43:13.520957 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
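Each certificate_manager record logs the same kubelet-serving expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline, because the certificate manager re-draws a jittered deadline within the certificate's lifetime on every check; all of the drawn deadlines here already lie in the past, so rotation is due. A sketch of that draw follows; the 70-90% band reflects my reading of client-go's certificate manager and should be treated as an assumption, as should the NotBefore value.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point inside the certificate's lifetime,
    // which is why each certificate_manager line shows a different deadline for
    // the same expiration. The 70-90% band is an assumption about client-go.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Expiration from the log; the issue time is assumed to be one year prior.
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
        notBefore := notAfter.AddDate(-1, 0, 0)
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }

Under these assumptions every draw lands between roughly late November 2025 and mid-January 2026, consistent with the four deadlines logged here, all of which precede the node's clock time, so the manager keeps retrying and re-drawing.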
Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.514295 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:28:25.851629735 +0000 UTC
Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.520737 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.520756 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv"
Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.520774 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:43:14 crc kubenswrapper[4966]: E0127 15:43:14.520865 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:43:14 crc kubenswrapper[4966]: E0127 15:43:14.520944 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:43:14 crc kubenswrapper[4966]: E0127 15:43:14.521035 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f"
Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.547956 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9
a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.562830 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.572506 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.591396 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.606071 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.621052 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.621118 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.621128 4966 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.621163 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.621180 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:14Z","lastTransitionTime":"2026-01-27T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.624662 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.645661 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297
a77a20cc7d91aa5f7158384c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.657517 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.671126 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.681738 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.692664 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.707198 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f07f3864-2cae-4080-b75f-bf81eb4a1a7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcde980381c287a07cdc32f8ed47856c21b63161d0aa2ce3428afa60ee7f57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4962fe98a4d56e85b34bf06c62aba732afd760b74b101291561fb8fe6ac9560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.720561 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.722993 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.723037 
4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.723048 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.723066 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.723110 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:14Z","lastTransitionTime":"2026-01-27T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.734719 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.748692 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.761731 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.772273 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 
15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.782952 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.825518 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.825560 4966 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.825569 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.825582 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.825591 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:14Z","lastTransitionTime":"2026-01-27T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.927945 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.927989 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.928001 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.928018 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:14 crc kubenswrapper[4966]: I0127 15:43:14.928029 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:14Z","lastTransitionTime":"2026-01-27T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.030930 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.030997 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.031014 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.031039 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.031056 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.134164 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.134198 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.134213 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.134235 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.134249 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.236996 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.237051 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.237065 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.237082 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.237117 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.340256 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.340307 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.340316 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.340337 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.340348 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.443453 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.443497 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.443509 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.443524 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.443536 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.515143 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:39:30.829366876 +0000 UTC Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.520625 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:15 crc kubenswrapper[4966]: E0127 15:43:15.520805 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.523358 4966 scope.go:117] "RemoveContainer" containerID="03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c" Jan 27 15:43:15 crc kubenswrapper[4966]: E0127 15:43:15.523663 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.545836 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.545872 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.545883 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.545928 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.545964 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.648755 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.648814 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.648833 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.648857 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.648875 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.752529 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.752603 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.752620 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.752646 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.752663 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.855220 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.855307 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.855325 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.855352 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.855413 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.958747 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.958799 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.958812 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.958829 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:15 crc kubenswrapper[4966]: I0127 15:43:15.958840 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:15Z","lastTransitionTime":"2026-01-27T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.061987 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.062027 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.062040 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.062058 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.062068 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.165769 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.165832 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.165848 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.165871 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.165888 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.269293 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.269349 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.269367 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.269391 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.269407 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.371992 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.372054 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.372077 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.372101 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.372116 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.474453 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.474502 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.474516 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.474535 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.474549 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.515307 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:08:06.318641225 +0000 UTC Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.520711 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.520745 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:16 crc kubenswrapper[4966]: E0127 15:43:16.520948 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.520970 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:16 crc kubenswrapper[4966]: E0127 15:43:16.521035 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:16 crc kubenswrapper[4966]: E0127 15:43:16.521172 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.577590 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.577644 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.577661 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.577690 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.577708 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.680517 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.680551 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.680564 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.680581 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.680593 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.783130 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.783168 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.783184 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.783207 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.783223 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.885798 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.886202 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.886389 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.886564 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.886720 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.989457 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.989506 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.989519 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.989536 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:16 crc kubenswrapper[4966]: I0127 15:43:16.989548 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:16Z","lastTransitionTime":"2026-01-27T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.091430 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.091478 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.091490 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.091506 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.091518 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:17Z","lastTransitionTime":"2026-01-27T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.194065 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.194096 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.194106 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.194121 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.194134 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:17Z","lastTransitionTime":"2026-01-27T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.296573 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.296604 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.296615 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.296633 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.296645 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:17Z","lastTransitionTime":"2026-01-27T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.399230 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.399265 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.399276 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.399292 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.399303 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:17Z","lastTransitionTime":"2026-01-27T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.501828 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.501861 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.501872 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.501887 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.501926 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:17Z","lastTransitionTime":"2026-01-27T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.515881 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:40:15.400733267 +0000 UTC
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.520323 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:43:17 crc kubenswrapper[4966]: E0127 15:43:17.520498 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.604251 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.604291 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.604308 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.604330 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.604348 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:17Z","lastTransitionTime":"2026-01-27T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.707865 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.707926 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.707937 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.707955 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.707966 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:17Z","lastTransitionTime":"2026-01-27T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.811090 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.811448 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.811662 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.811870 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.812200 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:17Z","lastTransitionTime":"2026-01-27T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.914369 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.914398 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.914406 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.914418 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:17 crc kubenswrapper[4966]: I0127 15:43:17.914427 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:17Z","lastTransitionTime":"2026-01-27T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.017381 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.017433 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.017449 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.017470 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.017487 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.119828 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.119862 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.119872 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.119886 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.119917 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.221451 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.221495 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.221507 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.221524 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.221536 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.325186 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.325231 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.325243 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.325261 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.325274 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.427725 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.427780 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.427798 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.427823 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.427840 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.517069 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:33:04.099671337 +0000 UTC
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.520579 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.520615 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.520644 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:43:18 crc kubenswrapper[4966]: E0127 15:43:18.520758 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f"
Jan 27 15:43:18 crc kubenswrapper[4966]: E0127 15:43:18.520850 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:43:18 crc kubenswrapper[4966]: E0127 15:43:18.520991 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.529715 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.529750 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.529765 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.529784 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.529797 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.633425 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.633512 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.633536 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.633606 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.633625 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.737143 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.737194 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.737205 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.737222 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.737235 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.839843 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.839877 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.839885 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.839914 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.839925 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.943310 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.943374 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.943397 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.943428 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:18 crc kubenswrapper[4966]: I0127 15:43:18.943445 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:18Z","lastTransitionTime":"2026-01-27T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.046185 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.046248 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.046270 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.046299 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.046319 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.148972 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.149014 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.149022 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.149037 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.149046 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.251478 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.251535 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.251550 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.251574 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.251589 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.354057 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.354091 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.354104 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.354120 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.354133 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.456776 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.456813 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.456823 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.456837 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.456848 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.517950 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:24:05.509529774 +0000 UTC
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.520317 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:43:19 crc kubenswrapper[4966]: E0127 15:43:19.520476 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.559302 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.559333 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.559342 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.559373 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.559381 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.662397 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.662457 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.662471 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.662491 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.662511 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.764518 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.764558 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.764570 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.764584 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.764594 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.867367 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.867402 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.867411 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.867428 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.867437 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.969264 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.969305 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.969319 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.969335 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:19 crc kubenswrapper[4966]: I0127 15:43:19.969347 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:19Z","lastTransitionTime":"2026-01-27T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.072269 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.072314 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.072330 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.072346 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.072356 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.174468 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.174505 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.174515 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.174530 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.174542 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.277182 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.277224 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.277236 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.277254 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.277266 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.380052 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.380560 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.380601 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.380635 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.380660 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.483380 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.483427 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.483440 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.483463 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.483475 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.518874 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:08:59.751285622 +0000 UTC
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.520064 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.520123 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.520129 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:43:20 crc kubenswrapper[4966]: E0127 15:43:20.520239 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:43:20 crc kubenswrapper[4966]: E0127 15:43:20.520355 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f"
Jan 27 15:43:20 crc kubenswrapper[4966]: E0127 15:43:20.520728 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.586183 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.586215 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.586224 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.586243 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.586260 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.689264 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.689333 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.689357 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.689387 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.689408 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.791917 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.791947 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.791956 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.791988 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.791998 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.894493 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.894543 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.894559 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.894582 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.894600 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.996679 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.996721 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.996732 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.996751 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:20 crc kubenswrapper[4966]: I0127 15:43:20.996762 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:20Z","lastTransitionTime":"2026-01-27T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.099282 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.099328 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.099343 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.099360 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.099371 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.201412 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.201454 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.201466 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.201480 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.201488 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.303313 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.303363 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.303375 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.303391 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.303401 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.304528 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.304565 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.304576 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.304595 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.304608 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:21 crc kubenswrapper[4966]: E0127 15:43:21.316383 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:21Z is after 2025-08-24T17:21:41Z"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.319343 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.319386 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.319398 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.319414 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.319425 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:21 crc kubenswrapper[4966]: E0127 15:43:21.331520 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.334280 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.334316 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.334326 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.334341 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.334351 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:21 crc kubenswrapper[4966]: E0127 15:43:21.346455 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.349236 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.349282 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
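The recurring webhook failure above is a plain certificate-validity failure: the endpoint at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is long before the node's clock (2026-01-27T15:43:21Z). A minimal Go sketch of the window check that crypto/x509 applies during verification; the dates come from the log, the NotBefore value and the helper itself are illustrative, not kubelet code:

package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// checkValidity mirrors the check chain verification performs: a certificate
// is rejected whenever the verification time falls outside [NotBefore, NotAfter].
func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("x509: certificate is not yet valid: current time %s is before %s",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("x509: certificate has expired: current time %s is after %s",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
	return nil
}

func main() {
	cert := &x509.Certificate{
		NotBefore: time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC), // assumed; not in the log
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry shown in the log
	}
	now := time.Date(2026, 1, 27, 15, 43, 21, 0, time.UTC) // "current time" from the log
	fmt.Println(checkValidity(cert, now))
}

Against the log's clock this prints the same shape of error the webhook call surfaces; renewing the serving certificate (or correcting the node clock) is what clears it.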
event="NodeHasNoDiskPressure" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.349299 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.349316 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.349329 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:21 crc kubenswrapper[4966]: E0127 15:43:21.361149 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.364743 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.364771 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
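The setters.go entries above embed the node's Ready condition as inline JSON. A short Go sketch of decoding that payload; the struct mirrors exactly the fields visible in the log line, while the type name NodeCondition is illustrative here rather than the k8s.io/api definition:

package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition matches the condition object logged by setters.go above.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Raw condition copied from the log entry.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
}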
event="NodeHasNoDiskPressure" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.364780 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.364794 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.364803 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:21 crc kubenswrapper[4966]: E0127 15:43:21.377348 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:21 crc kubenswrapper[4966]: E0127 15:43:21.377475 4966 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.406174 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
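The "update node status exceeds retry count" line marks the point where the kubelet's fixed retry budget for a single status sync is spent; five failed patch attempts are logged above before it gives up until the next sync interval. A Go sketch of that pattern, assuming a budget of five (the constant name nodeStatusUpdateRetry follows the kubelet's convention but is used here for illustration):

package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed budget, matching the five failed patches above

// updateNodeStatus retries the patch a fixed number of times, then gives up
// with the same error text seen in the log. Illustrative, not kubelet code.
func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	failing := func() error {
		return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	}
	fmt.Println(updateNodeStatus(failing)) // update node status exceeds retry count
}

Because the failure is deterministic (an expired certificate), every attempt in the budget fails identically, which is why the same payload repeats verbatim in the log.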
event="NodeHasSufficientMemory" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.406208 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.406217 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.406231 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.406241 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.508081 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.508122 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.508133 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.508151 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.508162 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.519994 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:15:23.054742257 +0000 UTC Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.520135 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:21 crc kubenswrapper[4966]: E0127 15:43:21.520269 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.610534 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.610571 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.610581 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.610596 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.610606 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.714087 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.714125 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.714133 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.714150 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.714158 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.816931 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.816994 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.817012 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.817036 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.817052 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.919853 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.919958 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.919979 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.920006 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:21 crc kubenswrapper[4966]: I0127 15:43:21.920024 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:21Z","lastTransitionTime":"2026-01-27T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.026444 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.026495 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.026506 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.026521 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.026530 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.128625 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.128663 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.128672 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.128687 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.128696 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.230582 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.230621 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.230632 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.230648 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.230660 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.332673 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.332712 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.332724 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.332740 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.332750 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.435125 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.435170 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.435181 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.435197 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.435205 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.520143 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.520202 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:13:39.58417062 +0000 UTC Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.520146 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:22 crc kubenswrapper[4966]: E0127 15:43:22.520327 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.520366 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:22 crc kubenswrapper[4966]: E0127 15:43:22.520508 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:22 crc kubenswrapper[4966]: E0127 15:43:22.520642 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.538130 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.538165 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.538174 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.538186 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.538195 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.640828 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.640871 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.640881 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.640912 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.640924 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.743077 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.743124 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.743142 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.743165 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.743181 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.846754 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.846797 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.846826 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.846843 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.846851 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.949413 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.949446 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.949454 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.949466 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:22 crc kubenswrapper[4966]: I0127 15:43:22.949475 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:22Z","lastTransitionTime":"2026-01-27T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.053117 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.053162 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.053174 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.053191 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.053203 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.155489 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.155521 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.155530 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.155543 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.155552 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.257403 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.257502 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.257514 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.257531 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.257542 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.359974 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.360020 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.360032 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.360048 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.360059 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.462697 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.462774 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.462793 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.462816 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.462832 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.520622 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:16:52.703607996 +0000 UTC Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.520831 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:23 crc kubenswrapper[4966]: E0127 15:43:23.521063 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.565513 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.565562 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.565572 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.565587 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.565597 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.668490 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.668536 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.668546 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.668564 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.668572 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.772926 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.772996 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.773019 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.773048 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.773070 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.875452 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.875505 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.875522 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.875540 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.875550 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.978443 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.978485 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.978500 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.978519 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:23 crc kubenswrapper[4966]: I0127 15:43:23.978531 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:23Z","lastTransitionTime":"2026-01-27T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.081059 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.081088 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.081099 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.081113 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.081122 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:24Z","lastTransitionTime":"2026-01-27T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.183111 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.183145 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.183152 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.183167 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.183177 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:24Z","lastTransitionTime":"2026-01-27T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.285226 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.285256 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.285265 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.285279 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.285288 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:24Z","lastTransitionTime":"2026-01-27T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.388033 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.388062 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.388079 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.388095 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.388108 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:24Z","lastTransitionTime":"2026-01-27T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.490198 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.490229 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.490241 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.490256 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.490267 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:24Z","lastTransitionTime":"2026-01-27T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.521005 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:03:31.273388659 +0000 UTC Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.521081 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:24 crc kubenswrapper[4966]: E0127 15:43:24.521181 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.521081 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.521242 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:24 crc kubenswrapper[4966]: E0127 15:43:24.521337 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:24 crc kubenswrapper[4966]: E0127 15:43:24.521546 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.576408 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.592706 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.592738 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.592749 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.592765 4966 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.592775 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:24Z","lastTransitionTime":"2026-01-27T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.594498 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.608104 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.625525 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297
a77a20cc7d91aa5f7158384c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.646978 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.662165 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.676445 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.689213 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.695215 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.695250 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.695261 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.695278 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.695288 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:24Z","lastTransitionTime":"2026-01-27T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.700509 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.712442 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.722801 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.733091 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.742175 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.750656 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.760738 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f07f3864-2cae-4080-b75f-bf81eb4a1a7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcde980381c287a07cdc32f8ed47856c21b63161d0aa2ce3428afa60ee7f57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4962fe98a4d56e85b34bf06c62aba732afd760b74b101291561fb8fe6ac9560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.772242 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566
516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.783075 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.795836 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.796923 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.796971 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.796982 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.797000 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.797009 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:24Z","lastTransitionTime":"2026-01-27T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.899912 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.899967 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.899980 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.899993 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:24 crc kubenswrapper[4966]: I0127 15:43:24.900003 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:24Z","lastTransitionTime":"2026-01-27T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.001952 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.001985 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.002019 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.002036 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.002045 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.104625 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.104671 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.104683 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.104702 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.104716 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.118085 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:25 crc kubenswrapper[4966]: E0127 15:43:25.118208 4966 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:43:25 crc kubenswrapper[4966]: E0127 15:43:25.118278 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs podName:311852f1-9764-49e5-a58a-5c2feee4ed1f nodeName:}" failed. No retries permitted until 2026-01-27 15:43:57.118261092 +0000 UTC m=+103.421054590 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs") pod "network-metrics-daemon-2fsdv" (UID: "311852f1-9764-49e5-a58a-5c2feee4ed1f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.207417 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.207470 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.207483 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.207499 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.207511 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.309713 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.309765 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.309778 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.309794 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.309806 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.411757 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.411814 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.411829 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.411851 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.411868 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.513984 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.514032 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.514043 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.514060 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.514073 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.520343 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:25 crc kubenswrapper[4966]: E0127 15:43:25.520475 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.521409 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:54:49.0140672 +0000 UTC Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.616469 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.616496 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.616503 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.616516 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.616525 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.718869 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.719002 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.719028 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.719059 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.719082 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.821358 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.821398 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.821408 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.821423 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.821434 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.923981 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.924024 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.924036 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.924053 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:25 crc kubenswrapper[4966]: I0127 15:43:25.924064 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:25Z","lastTransitionTime":"2026-01-27T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.026004 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.026042 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.026055 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.026071 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.026083 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.128425 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.128500 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.128523 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.128555 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.128577 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.232126 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.232164 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.232175 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.232189 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.232200 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.334252 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.334277 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.334285 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.334299 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.334308 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.436855 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.436914 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.436928 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.436946 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.436958 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.520362 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.520462 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.520468 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:26 crc kubenswrapper[4966]: E0127 15:43:26.520597 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:26 crc kubenswrapper[4966]: E0127 15:43:26.520687 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:26 crc kubenswrapper[4966]: E0127 15:43:26.520828 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.521581 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:26:30.611116807 +0000 UTC Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.539377 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.539413 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.539424 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.539438 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.539449 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.641753 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.641795 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.641808 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.641826 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.641840 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.744114 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.744138 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.744147 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.744162 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.744170 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.846695 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.846743 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.846764 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.846780 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.846788 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.949004 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.949036 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.949044 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.949056 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.949082 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:26Z","lastTransitionTime":"2026-01-27T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.978364 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5xktc_43e2b070-838d-4a18-9a86-1683f64b641c/kube-multus/0.log" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.978412 4966 generic.go:334] "Generic (PLEG): container finished" podID="43e2b070-838d-4a18-9a86-1683f64b641c" containerID="7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117" exitCode=1 Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.978440 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xktc" event={"ID":"43e2b070-838d-4a18-9a86-1683f64b641c","Type":"ContainerDied","Data":"7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117"} Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.978985 4966 scope.go:117] "RemoveContainer" containerID="7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117" Jan 27 15:43:26 crc kubenswrapper[4966]: I0127 15:43:26.994864 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f07f3864-2cae-4080-b75f-bf81eb4a1a7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcde980381c287a07cdc32f8ed47856c21b63161d0aa2ce3428afa60ee7f57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4962fe98a4d56e85b34bf06c62aba732afd760b74b101291561fb8fe6ac9560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.007499 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.019753 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.030307 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.041749 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.051104 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.051146 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.051160 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.051175 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.051185 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.052132 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.061336 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.072528 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.083328 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"2026-01-27T15:42:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235\\\\n2026-01-27T15:42:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235 to /host/opt/cni/bin/\\\\n2026-01-27T15:42:41Z [verbose] multus-daemon started\\\\n2026-01-27T15:42:41Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:43:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.095113 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 
15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.105866 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.124475 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.141331 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.153644 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.153667 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.153676 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.153689 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.153698 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.155344 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.166019 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.180952 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297
a77a20cc7d91aa5f7158384c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z"
Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.199259 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.208477 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.255963 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.255998 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.256009 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.256025 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.256037 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.359295 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.359369 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.359393 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.359428 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.359462 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.462139 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.462172 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.462181 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.462195 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.462204 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.520134 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:27 crc kubenswrapper[4966]: E0127 15:43:27.520256 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.521228 4966 scope.go:117] "RemoveContainer" containerID="03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.521718 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:44:56.925880204 +0000 UTC Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.565722 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.565748 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.565759 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.565774 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.565785 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.668533 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.668568 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.668577 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.668591 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.668604 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.771566 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.771606 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.771637 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.771653 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.771666 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.873295 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.873337 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.873353 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.873375 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.873391 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.975451 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.975488 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.975496 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.975509 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.975518 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:27Z","lastTransitionTime":"2026-01-27T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.983523 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/2.log" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.986586 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb"} Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.987112 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.988386 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5xktc_43e2b070-838d-4a18-9a86-1683f64b641c/kube-multus/0.log" Jan 27 15:43:27 crc kubenswrapper[4966]: I0127 15:43:27.988461 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xktc" event={"ID":"43e2b070-838d-4a18-9a86-1683f64b641c","Type":"ContainerStarted","Data":"5c9cd0e94c2c0c257d9004d0cb7ddb379b63e0af8b4de162939674ce4fd270c4"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.008375 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.020599 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"
reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473
a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z"
Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.030210 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runn
ing\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.043117 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.052344 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.063601 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.078657 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.078696 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.078708 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.078730 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.078743 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:28Z","lastTransitionTime":"2026-01-27T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.083695 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] 
Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.097493 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.108145 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.117251 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.126614 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.137221 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f07f3864-2cae-4080-b75f-bf81eb4a1a7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcde980381c287a07cdc32f8ed47856c21b63161d0aa2ce3428afa60ee7f57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4962fe98a4d56e85b34bf06c62aba732afd760b74b101291561fb8fe6ac9560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.151782 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.163827 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.174691 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.181475 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.181514 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.181527 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.181542 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.181554 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:28Z","lastTransitionTime":"2026-01-27T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.187057 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"2026-01-27T15:42:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235\\\\n2026-01-27T15:42:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235 to /host/opt/cni/bin/\\\\n2026-01-27T15:42:41Z [verbose] multus-daemon started\\\\n2026-01-27T15:42:41Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:43:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.200422 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 
15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.210210 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.223110 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.235612 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.245527 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.256004 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.267414 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f07f3864-2cae-4080-b75f-bf81eb4a1a7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcde980381c287a07cdc32f8ed47856c21b63161d0aa2ce3428afa60ee7f57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4962fe98a4d56e85b34bf06c62aba732afd760b74b101291561fb8fe6ac9560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.283107 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.284356 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.284420 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.284435 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.284476 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.284492 4966 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:28Z","lastTransitionTime":"2026-01-27T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.294552 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.305236 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.319442 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9cd0e94c2c0c257d9004d0cb7ddb379b63e0af8b4de162939674ce4fd270c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"2026-01-27T15:42:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235\\\\n2026-01-27T15:42:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235 to /host/opt/cni/bin/\\\\n2026-01-27T15:42:41Z [verbose] multus-daemon started\\\\n2026-01-27T15:42:41Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:43:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.330976 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 
15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.340743 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.360661 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.373556 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"
reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473
a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.383775 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runn
ing\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.386352 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.386387 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.386397 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.386410 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.386419 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:28Z","lastTransitionTime":"2026-01-27T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.394928 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.408718 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.419159 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.437603 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9525b2970d82e7d4eba30544d239545ef06bb
e528b57245859c04e24024cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] 
Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.488927 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.488986 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.489002 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.489025 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.489043 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:28Z","lastTransitionTime":"2026-01-27T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.520665 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:28 crc kubenswrapper[4966]: E0127 15:43:28.520884 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.520970 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:28 crc kubenswrapper[4966]: E0127 15:43:28.521143 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.521532 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:28 crc kubenswrapper[4966]: E0127 15:43:28.521692 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.521799 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 14:10:28.114070726 +0000 UTC Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.591523 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.591585 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.591602 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.591626 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.591644 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:28Z","lastTransitionTime":"2026-01-27T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.694376 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.694459 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.694485 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.694515 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.694537 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:28Z","lastTransitionTime":"2026-01-27T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.797360 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.797419 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.797436 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.797457 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.797473 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:28Z","lastTransitionTime":"2026-01-27T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.900035 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.900072 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.900080 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.900095 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.900104 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:28Z","lastTransitionTime":"2026-01-27T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.994818 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/3.log" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.995827 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/2.log" Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.999871 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb" exitCode=1 Jan 27 15:43:28 crc kubenswrapper[4966]: I0127 15:43:28.999951 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.000011 4966 scope.go:117] "RemoveContainer" containerID="03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.000959 4966 scope.go:117] "RemoveContainer" containerID="8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb" Jan 27 15:43:29 crc kubenswrapper[4966]: E0127 15:43:29.001280 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.002488 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.002531 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.002548 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.002571 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.002597 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.024718 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.039833 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.061556 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.080264 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9525b2970d82e7d4eba30544d239545ef06bb
e528b57245859c04e24024cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03282f2a380a26bd58abe45e37cae144d7873297a77a20cc7d91aa5f7158384c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:02Z\\\",\\\"message\\\":\\\"ube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:02Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:43:02.464440 6646 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.119:443: 10.217.5.119:8443:]}] Rows:[] Columns:[] Mutations:[] Tim\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:28Z\\\",\\\"message\\\":\\\"22 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:43:28.328122 7022 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:43:28.328227 7022 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:43:28.328304 7022 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:43:28.328722 7022 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 15:43:28.328827 7022 factory.go:656] Stopping watch factory\\\\nI0127 15:43:28.328851 7022 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:43:28.381203 7022 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 15:43:28.381242 7022 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 15:43:28.381311 7022 ovnkube.go:599] Stopped 
ovnkube\\\\nI0127 15:43:28.381343 7022 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 15:43:28.381436 7022 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.099205 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.105403 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.105464 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc 
kubenswrapper[4966]: I0127 15:43:29.105481 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.105506 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.105524 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.111642 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 
27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.125171 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.142407 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f07f3864-2cae-4080-b75f-bf81eb4a1a7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcde980381c287a07cdc32f8ed47856c21b63161d0aa2ce3428afa60ee7f57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4962fe98a4d56e85b34bf06c62aba732afd760b74b101291561fb8fe6ac9560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.170928 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc 
kubenswrapper[4966]: I0127 15:43:29.188252 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.203511 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.215962 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.216032 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.216049 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.216073 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.216090 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.219869 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.238333 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.252955 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.265205 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9cd0e94c2c0c257d9004d0cb7ddb379b63e0af8b4de162939674ce4fd270c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"2026-01-27T15:42:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235\\\\n2026-01-27T15:42:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235 to /host/opt/cni/bin/\\\\n2026-01-27T15:42:41Z [verbose] multus-daemon started\\\\n2026-01-27T15:42:41Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:43:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.276183 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 
15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.285608 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.305439 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf2
0469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.318188 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.318211 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.318219 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.318232 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.318242 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.420796 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.420823 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.420860 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.420874 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.420885 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.520682 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:29 crc kubenswrapper[4966]: E0127 15:43:29.520860 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.521942 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:46:46.577750725 +0000 UTC Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.523503 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.523540 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.523558 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.523578 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.523594 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.625872 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.625948 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.625966 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.625990 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.626007 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.728604 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.728649 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.728666 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.728690 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.728713 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.830765 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.830793 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.830802 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.830814 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.830824 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.933793 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.933844 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.933858 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.933887 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:29 crc kubenswrapper[4966]: I0127 15:43:29.933948 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:29Z","lastTransitionTime":"2026-01-27T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.005411 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/3.log" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.009702 4966 scope.go:117] "RemoveContainer" containerID="8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb" Jan 27 15:43:30 crc kubenswrapper[4966]: E0127 15:43:30.010029 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.029243 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.036138 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.036171 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.036179 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.036193 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.036203 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.059986 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:28Z\\\",\\\"message\\\":\\\"22 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:43:28.328122 7022 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:43:28.328227 7022 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:43:28.328304 7022 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:43:28.328722 7022 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 15:43:28.328827 7022 factory.go:656] Stopping watch factory\\\\nI0127 15:43:28.328851 7022 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:43:28.381203 7022 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 15:43:28.381242 7022 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 15:43:28.381311 7022 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:43:28.381343 7022 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 15:43:28.381436 7022 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.086275 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.102508 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.120178 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.137888 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.138853 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.138948 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.138974 4966 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.139007 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.139026 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.158835 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.180478 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.196248 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.215697 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.230852 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.241476 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.241549 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.241578 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.241658 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.241684 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.246699 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.261013 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f07f3864-2cae-4080-b75f-bf81eb4a1a7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcde980381c287a07cdc32f8ed47856c21b63161d0aa2ce3428afa60ee7f57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4962fe98a4d56e85b34bf06c62aba732afd760b74b101291561fb8fe6ac9560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.285830 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.297442 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.312500 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9cd0e94c2c0c257d9004d0cb7ddb379b63e0af8b4de162939674ce4fd270c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"2026-01-27T15:42:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235\\\\n2026-01-27T15:42:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235 to /host/opt/cni/bin/\\\\n2026-01-27T15:42:41Z [verbose] multus-daemon started\\\\n2026-01-27T15:42:41Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:43:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.324533 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 
15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.348290 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.348349 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.348368 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.348392 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.348407 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.352330 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.451260 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.451329 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.451349 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.451373 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.451390 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.520820 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.520929 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.520850 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:30 crc kubenswrapper[4966]: E0127 15:43:30.521066 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:30 crc kubenswrapper[4966]: E0127 15:43:30.521201 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:30 crc kubenswrapper[4966]: E0127 15:43:30.521373 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.523158 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:00:01.557790495 +0000 UTC Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.555055 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.555137 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.555161 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.555189 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.555207 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.658690 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.658772 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.658798 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.658830 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.658853 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.762615 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.762682 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.762700 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.762723 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.762741 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.866212 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.866298 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.866325 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.866351 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.866370 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.969974 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.970034 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.970051 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.970074 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:30 crc kubenswrapper[4966]: I0127 15:43:30.970092 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:30Z","lastTransitionTime":"2026-01-27T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.073454 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.073539 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.073566 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.073600 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.073625 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.176380 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.176427 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.176440 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.176458 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.176469 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.280275 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.280344 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.280366 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.280395 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.280412 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.383176 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.383254 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.383275 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.383300 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.383318 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.485882 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.486090 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.486113 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.486139 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.486156 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.520873 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:31 crc kubenswrapper[4966]: E0127 15:43:31.521113 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.524245 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:24:49.964121675 +0000 UTC Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.589312 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.589376 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.589460 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.589506 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.589532 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.678762 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.678842 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.678865 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.678889 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.678952 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: E0127 15:43:31.697285 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.702337 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.702423 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.702475 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.702503 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.702520 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: E0127 15:43:31.718307 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.723215 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.723273 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.723296 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.723326 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.723346 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: E0127 15:43:31.743759 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.748693 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.748760 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.748778 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.748801 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.748818 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: E0127 15:43:31.769146 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.773524 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.773574 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.773593 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.773618 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.773635 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: E0127 15:43:31.788451 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b746d435-9a50-4ea2-9e69-34be734a7dee\\\",\\\"systemUUID\\\":\\\"dd047662-73e9-4358-9128-488711b4c80e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:31 crc kubenswrapper[4966]: E0127 15:43:31.788690 4966 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.790601 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.790654 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.790671 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.790691 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.790707 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.893533 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.893609 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.893632 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.893662 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.893683 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.996929 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.996980 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.996999 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.997025 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:31 crc kubenswrapper[4966]: I0127 15:43:31.997041 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:31Z","lastTransitionTime":"2026-01-27T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.099708 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.099769 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.099779 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.099792 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.099800 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:32Z","lastTransitionTime":"2026-01-27T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.203022 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.203087 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.203102 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.203124 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.203139 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:32Z","lastTransitionTime":"2026-01-27T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.313629 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.313715 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.313827 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.313884 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.313981 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:32Z","lastTransitionTime":"2026-01-27T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.417451 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.417518 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.417536 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.417565 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.417587 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:32Z","lastTransitionTime":"2026-01-27T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.520049 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.520065 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:32 crc kubenswrapper[4966]: E0127 15:43:32.520261 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.520332 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:32 crc kubenswrapper[4966]: E0127 15:43:32.520533 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:32 crc kubenswrapper[4966]: E0127 15:43:32.520657 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.521238 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.521273 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.521283 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.521296 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.521305 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:32Z","lastTransitionTime":"2026-01-27T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.524634 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:36:42.976277947 +0000 UTC Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.625090 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.625162 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.625184 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.625212 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.625231 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:32Z","lastTransitionTime":"2026-01-27T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.729182 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.729245 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.729268 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.729298 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.729319 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:32Z","lastTransitionTime":"2026-01-27T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.832372 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.832434 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.832449 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.832474 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.832490 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:32Z","lastTransitionTime":"2026-01-27T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.935837 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.935958 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.935985 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.936016 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:32 crc kubenswrapper[4966]: I0127 15:43:32.936040 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:32Z","lastTransitionTime":"2026-01-27T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.038449 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.038499 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.038529 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.038570 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.038595 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.142255 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.142335 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.142352 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.142395 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.142431 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.245332 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.245393 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.245409 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.245439 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.245464 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.349760 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.349822 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.349840 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.349863 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.349881 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.452728 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.452834 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.452854 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.452879 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.452926 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.519819 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:33 crc kubenswrapper[4966]: E0127 15:43:33.520049 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.524942 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:27:06.423609526 +0000 UTC Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.555824 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.555862 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.555871 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.555886 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.555911 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.659544 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.659595 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.659611 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.659637 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.659659 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.763439 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.763542 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.763564 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.763593 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.763614 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.867500 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.867569 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.867587 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.867613 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.867631 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.971360 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.971422 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.971439 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.971464 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:33 crc kubenswrapper[4966]: I0127 15:43:33.971481 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:33Z","lastTransitionTime":"2026-01-27T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.073836 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.073875 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.073882 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.073918 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.073928 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:34Z","lastTransitionTime":"2026-01-27T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.177334 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.177395 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.177416 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.177441 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.177459 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:34Z","lastTransitionTime":"2026-01-27T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.280043 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.280105 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.280128 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.280156 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.280180 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:34Z","lastTransitionTime":"2026-01-27T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.383874 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.383976 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.384000 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.384025 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.384046 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:34Z","lastTransitionTime":"2026-01-27T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.487880 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.487974 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.487996 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.488029 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.488051 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:34Z","lastTransitionTime":"2026-01-27T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.520035 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.520084 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:34 crc kubenswrapper[4966]: E0127 15:43:34.520207 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.520305 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:34 crc kubenswrapper[4966]: E0127 15:43:34.520440 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:34 crc kubenswrapper[4966]: E0127 15:43:34.520506 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.525517 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:09:58.445452872 +0000 UTC Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.556209 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfc6e9e2-890f-4dfb-a4c6-02029ec6b31a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507973827cb4c68b9610a1de3f07b6a3959928c36c4a97b441dc9259572cd3c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda5148ec32ba7bb64ac5fe730c47600d873ffc5049a091257a737135fd1683e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://948553d2636dc8881484f64cb9d5849ce3f8dd60b9efbb8acc1940039700c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80601635c170a1f3bbd0752667c43b724eb1cf20469a1c3f526471802dac9bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ce90e00fbdeee2f9a42183b9f080c3e60ade0e4ce5b2b7646ad8c6732a5a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37b45d3516901652710aa9294f184da2085da58e381b7e56e56483b0ab9a0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94323349ebe00986599e9b3631d4307bf2e3322b268b595285614c75e76f073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd7241e44916391cbca116e1ec1bd4e0a6f3c546db5752cca2f42dbf6248126\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.572967 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tgscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3221fb17-1692-4499-b801-a980276d6162\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fd568247f93ac1b142fb851afb8c489adb7a824c770de50e5de4560d2c4c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2zgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tgscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.591056 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.592042 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.592158 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.592363 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.592544 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:34Z","lastTransitionTime":"2026-01-27T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.596484 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c9e5b76e3a28e2f5050daa9ad37410da950c161be8e4ded594bb731a45d02f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.614719 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ba219b186d6a55ff383a7cafeaa3e5d39e9cbadf92fee40eb2925a1654e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f721f1f3c6d8546bd8f8e80b55e726348e34aa723641d552e1c51a6cb2d1b99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.630750 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.658654 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a25d116-d49b-4533-bac7-74bee93062b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9525b2970d82e7d4eba30544d239545ef06bb
e528b57245859c04e24024cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:28Z\\\",\\\"message\\\":\\\"22 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:43:28.328122 7022 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:43:28.328227 7022 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:43:28.328304 7022 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:43:28.328722 7022 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 15:43:28.328827 7022 factory.go:656] Stopping watch factory\\\\nI0127 15:43:28.328851 7022 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:43:28.381203 7022 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 15:43:28.381242 7022 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 15:43:28.381311 7022 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:43:28.381343 7022 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 15:43:28.381436 7022 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:43:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgdlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glbg8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.680253 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-24tw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"108b786e-606b-4603-b136-6a3d61fe7ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa10cf60c1b10f15157df277c2723ccd995c0bbde11d04a5eead02e9cbcaa76c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f3ad23ac81025a4cfbad9125e05e5566de7134289d8cac84b3a4515e06aea3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2466f177ee16019b62f06108bc26fe6e0b1b702fafdeadad8f3a53c02b3da1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18899ea579c4ea590723c7a61324479031cde32c43bf2de6985cca2265dc5ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T15:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7584e828fc44cad3d4d2b6ad618a147fe266860447cdb62196ca43eb6da72f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://480de6b34b6125c166c09df975b7d8d25ae898af8c80924147843ecb5bfb5fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://215810db8b0cbd14546195b0f7cd86173aece79d51bd045c8a53e1911978137a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-24tw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.693519 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.696796 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.696848 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.696866 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.696888 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.696926 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:34Z","lastTransitionTime":"2026-01-27T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.703232 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fkbrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b59127c-bd9c-493b-b2fc-5cec06c21bf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee3c56b4e60ec276853c0827d88d786e7ae6899173f4b0fe6f9e5057b7a649e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zggpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fkbrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.713887 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75889828-fc5d-4516-a3c4-db3affd4f810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b4fcadb40567d166116c43fa829760862320a5067e407f83b7e65064e50f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6z46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtl9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.725236 4966 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f07f3864-2cae-4080-b75f-bf81eb4a1a7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcde980381c287a07cdc32f8ed47856c21b63161d0aa2ce3428afa60ee7f57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4962fe98a4d56e85b34bf06c62aba732afd760b74b101291561fb8fe6ac9560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2754f015d7150bace4cf0d897fcb8d116ccc6be32dbe42c5d8b08ed553b1c5b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.737197 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da132b9-a1bf-4cad-b364-950b3a4ccc81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 15:42:28.329183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:42:28.331244 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-280619462/tls.crt::/tmp/serving-cert-280619462/tls.key\\\\\\\"\\\\nI0127 15:42:34.290512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:42:34.297451 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:42:34.297488 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:42:34.297526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:42:34.297539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:42:34.311225 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 15:42:34.311253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:42:34.311271 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:42:34.311279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:42:34.311284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:42:34.311290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 15:42:34.311403 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 15:42:34.315059 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.750335 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7b99de5-45f0-4ea0-87fc-c8bda27d9c70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5553768ef1a3a298e5a3844ffc302dce940009efc98b0441ffb6df148fb66dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b58906e98a8c00708bf0f6a577da5c1f7623a80efba2293786ece1f8f93417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a9a99f367a6d25839ef6cc5eec87904384090ccf8f7f326db57e85fc000a03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.765420 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.779347 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5160a5589920273fa66a901cac28c88d4926f1b53d40309d3c257bc02b321e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.795277 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xktc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43e2b070-838d-4a18-9a86-1683f64b641c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:43:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9cd0e94c2c0c257d9004d0cb7ddb379b63e0af8b4de162939674ce4fd270c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:43:26Z\\\",\\\"message\\\":\\\"2026-01-27T15:42:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235\\\\n2026-01-27T15:42:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_12c2123c-d843-4abd-8893-28e721d8e235 to /host/opt/cni/bin/\\\\n2026-01-27T15:42:41Z [verbose] multus-daemon started\\\\n2026-01-27T15:42:41Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:43:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:42:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lct5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xktc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.799260 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.799281 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.799288 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.799301 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.799310 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:34Z","lastTransitionTime":"2026-01-27T15:43:34Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.810114 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc3cf140-b0df-4b4b-9366-3fa1cb9ac057\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d05a447f563ef4849de41640300bfb6b1eb8074a3312f1695ff6bf531bd02c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1da1a9e566516a476846e68f4dc9df47d3786b76c83c579bb14dba3de53396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-27T15:42:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zwhlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z"
Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.824732 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"311852f1-9764-49e5-a58a-5c2feee4ed1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92dm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fsdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:43:34Z is after 2025-08-24T17:21:41Z"
Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.903541 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.903615 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.903628 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.903649 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:34 crc kubenswrapper[4966]: I0127 15:43:34.903664 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:34Z","lastTransitionTime":"2026-01-27T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.006058 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.006099 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.006111 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.006127 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.006136 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.108603 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.108657 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.108670 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.108691 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.108703 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.211613 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.211678 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.211704 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.211733 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.211753 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.314349 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.314396 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.314410 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.314427 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.314440 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.417853 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.417947 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.417968 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.417992 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.418009 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.520116 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:43:35 crc kubenswrapper[4966]: E0127 15:43:35.520719 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.521890 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.521995 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.522012 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.522034 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.522051 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.526489 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:01:23.295889709 +0000 UTC
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.625390 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.625442 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.625461 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.625486 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.625503 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.728841 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.728927 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.728946 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.728969 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.728987 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.831475 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.831541 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.831567 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.831597 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.831620 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.934758 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.934820 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.934842 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.934874 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:35 crc kubenswrapper[4966]: I0127 15:43:35.934929 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:35Z","lastTransitionTime":"2026-01-27T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.040678 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.040740 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.040763 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.040791 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.040815 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.144361 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.144449 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.144472 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.144502 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.144524 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.247990 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.248053 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.248107 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.248135 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.248152 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.352930 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.352992 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.353010 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.353035 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.353053 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.455406 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.455458 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.455474 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.455498 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.455517 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.519837 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:43:36 crc kubenswrapper[4966]: E0127 15:43:36.520077 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.520152 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.520242 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv"
Jan 27 15:43:36 crc kubenswrapper[4966]: E0127 15:43:36.520308 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:43:36 crc kubenswrapper[4966]: E0127 15:43:36.520411 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.526663 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:38:50.364937523 +0000 UTC
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.559838 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.559958 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.559993 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.560022 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.560047 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.663517 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.663581 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.663605 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.663635 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.663655 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.771368 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.771455 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.771474 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.771499 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.771514 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.874811 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.874940 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.874972 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.875006 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.875033 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.977062 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.977129 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.977151 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.977180 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:36 crc kubenswrapper[4966]: I0127 15:43:36.977203 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:36Z","lastTransitionTime":"2026-01-27T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.079360 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.079418 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.079429 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.079447 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.079460 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:37Z","lastTransitionTime":"2026-01-27T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.182460 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.182506 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.182518 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.182537 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.182549 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:37Z","lastTransitionTime":"2026-01-27T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.285936 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.286068 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.286100 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.286130 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.286151 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:37Z","lastTransitionTime":"2026-01-27T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.389418 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.389468 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.389476 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.389492 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.389501 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:37Z","lastTransitionTime":"2026-01-27T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.491986 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.492041 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.492057 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.492080 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.492096 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:37Z","lastTransitionTime":"2026-01-27T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.520636 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:43:37 crc kubenswrapper[4966]: E0127 15:43:37.520824 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.526871 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 06:12:00.418342987 +0000 UTC
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.595021 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.595081 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.595096 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.595120 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.595139 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:37Z","lastTransitionTime":"2026-01-27T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.698274 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.698334 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.698352 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.698375 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.698392 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:37Z","lastTransitionTime":"2026-01-27T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.801635 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.801699 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.801722 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.801755 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.801774 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:37Z","lastTransitionTime":"2026-01-27T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.904969 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.905035 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.905053 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.905078 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:37 crc kubenswrapper[4966]: I0127 15:43:37.905102 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:37Z","lastTransitionTime":"2026-01-27T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.007623 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.007685 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.007705 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.007730 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.007752 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.110533 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.110591 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.110606 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.110627 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.110641 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.213760 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.213803 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.213814 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.213832 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.213848 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.264796 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.265053 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:42.265023749 +0000 UTC m=+148.567817247 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.265511 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.265668 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.265697 4966 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.265802 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:42.265772313 +0000 UTC m=+148.568565841 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.265818 4966 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.265865 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:42.265855495 +0000 UTC m=+148.568648993 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.317001 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.317071 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.317092 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.317117 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.317136 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.367058 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.367156 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.367238 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.367280 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.367300 4966 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.367314 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.367339 4966 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.367356 4966 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.367389 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:42.367356855 +0000 UTC m=+148.670150373 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.367416 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:42.367404116 +0000 UTC m=+148.670197634 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.420164 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.420222 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.420239 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.420263 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.420281 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.520647 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.520770 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.520654 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.520960 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f"
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.520984 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:43:38 crc kubenswrapper[4966]: E0127 15:43:38.521153 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.522381 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.522447 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.522471 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.522499 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.522520 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.527066 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:50:21.786423122 +0000 UTC
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.626684 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.626756 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.626778 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.626806 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.626827 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.729706 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.729744 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.729753 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.729768 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.729800 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.832949 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.832993 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.833004 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.833019 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.833030 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.936482 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.936514 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.936522 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.936535 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:38 crc kubenswrapper[4966]: I0127 15:43:38.936543 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:38Z","lastTransitionTime":"2026-01-27T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.039510 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.039551 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.039560 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.039574 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.039583 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.142505 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.142598 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.142632 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.142664 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.142686 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.245395 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.245450 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.245465 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.245486 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.245532 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.349537 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.349595 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.349611 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.349634 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.349652 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.452952 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.453017 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.453039 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.453069 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.453091 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.519853 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:39 crc kubenswrapper[4966]: E0127 15:43:39.520011 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.528219 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:27:16.787112126 +0000 UTC Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.556139 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.556187 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.556203 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.556222 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.556237 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.658835 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.658891 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.658942 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.658967 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.658985 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.761406 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.761464 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.761482 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.761506 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.761525 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.863722 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.863803 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.863821 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.863848 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.863866 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.967085 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.967142 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.967155 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.967174 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:39 crc kubenswrapper[4966]: I0127 15:43:39.967188 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:39Z","lastTransitionTime":"2026-01-27T15:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.070376 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.070447 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.070470 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.070539 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.070563 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.173737 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.173781 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.173793 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.173810 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.173824 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.276545 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.276601 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.276622 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.276652 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.276673 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.380003 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.380055 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.380073 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.380094 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.380109 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.482686 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.482764 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.482778 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.482801 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.482816 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.520116 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.520166 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.520133 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:40 crc kubenswrapper[4966]: E0127 15:43:40.520301 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:40 crc kubenswrapper[4966]: E0127 15:43:40.520456 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:40 crc kubenswrapper[4966]: E0127 15:43:40.520574 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.529325 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:58:25.967406009 +0000 UTC Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.585424 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.585484 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.585500 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.585522 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.585539 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.687855 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.688028 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.688053 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.688069 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.688081 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.791487 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.791553 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.791571 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.791597 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.791614 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.894751 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.894795 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.894809 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.894828 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.894844 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.997526 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.997564 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.997577 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.997592 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:40 crc kubenswrapper[4966]: I0127 15:43:40.997605 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:40Z","lastTransitionTime":"2026-01-27T15:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.100507 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.100589 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.100615 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.100649 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.100672 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.203572 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.203654 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.203679 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.203714 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.203732 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.307123 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.307229 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.307248 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.307276 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.307295 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.410132 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.410212 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.410236 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.410659 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.410698 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.515443 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.515505 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.515523 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.515548 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.515567 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.520117 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:41 crc kubenswrapper[4966]: E0127 15:43:41.520440 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.530211 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:01:21.941143447 +0000 UTC Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.534509 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.618364 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.618393 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.618401 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.618414 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.618422 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.721218 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.721291 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.721300 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.721314 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.721323 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.824972 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.825006 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.825014 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.825030 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.825041 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.928805 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.928881 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.928940 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.928973 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.928993 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.947383 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.947446 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.947471 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.947533 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:43:41 crc kubenswrapper[4966]: I0127 15:43:41.947551 4966 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:43:41Z","lastTransitionTime":"2026-01-27T15:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.008126 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6"] Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.008561 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.010999 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.011307 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.011349 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.013102 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.038622 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=66.03858114 podStartE2EDuration="1m6.03858114s" podCreationTimestamp="2026-01-27 15:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.035961338 +0000 UTC m=+88.338754876" watchObservedRunningTime="2026-01-27 15:43:42.03858114 +0000 UTC m=+88.341374648" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.060152 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.060132221 podStartE2EDuration="1.060132221s" podCreationTimestamp="2026-01-27 15:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.059427399 +0000 UTC m=+88.362220917" watchObservedRunningTime="2026-01-27 15:43:42.060132221 +0000 UTC m=+88.362925719" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.109392 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.109435 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.109478 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.109503 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.109524 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.142616 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-24tw6" podStartSLOduration=64.142600498 podStartE2EDuration="1m4.142600498s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.130821901 +0000 UTC m=+88.433615419" watchObservedRunningTime="2026-01-27 15:43:42.142600498 +0000 UTC m=+88.445393986" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.156371 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tgscb" podStartSLOduration=64.156354266 podStartE2EDuration="1m4.156354266s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.143120294 +0000 UTC m=+88.445913792" watchObservedRunningTime="2026-01-27 15:43:42.156354266 +0000 UTC m=+88.459147754" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.209263 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.209244492 podStartE2EDuration="1m6.209244492s" podCreationTimestamp="2026-01-27 15:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.196890688 +0000 UTC m=+88.499703657" watchObservedRunningTime="2026-01-27 15:43:42.209244492 +0000 UTC m=+88.512037990" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.210018 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.210052 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.210090 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.210113 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.210136 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.210187 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.210252 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.211075 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.218751 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.228322 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ec263f5-3c90-48a5-9baa-9703e5d7e0ae-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cmsm6\" (UID: \"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.267544 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fkbrc" podStartSLOduration=64.267521167 podStartE2EDuration="1m4.267521167s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.254542882 +0000 UTC m=+88.557336390" watchObservedRunningTime="2026-01-27 15:43:42.267521167 +0000 UTC m=+88.570314645" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.286677 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.286641402 podStartE2EDuration="37.286641402s" podCreationTimestamp="2026-01-27 15:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.285859547 +0000 UTC m=+88.588653045" watchObservedRunningTime="2026-01-27 15:43:42.286641402 +0000 UTC m=+88.589434890" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.287022 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podStartSLOduration=64.287016134 podStartE2EDuration="1m4.287016134s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.268071763 +0000 UTC m=+88.570865271" watchObservedRunningTime="2026-01-27 15:43:42.287016134 +0000 UTC m=+88.589809622" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.299786 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.29975938 podStartE2EDuration="1m8.29975938s" podCreationTimestamp="2026-01-27 15:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.298515172 +0000 UTC m=+88.601308670" watchObservedRunningTime="2026-01-27 15:43:42.29975938 +0000 UTC m=+88.602552868" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.323045 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.333515 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5xktc" podStartSLOduration=64.33349908 podStartE2EDuration="1m4.33349908s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.332420456 +0000 UTC m=+88.635213954" watchObservedRunningTime="2026-01-27 15:43:42.33349908 +0000 UTC m=+88.636292568" Jan 27 15:43:42 crc kubenswrapper[4966]: W0127 15:43:42.341304 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ec263f5_3c90_48a5_9baa_9703e5d7e0ae.slice/crio-7d09b8f4f5f95199cf92aceff45b1c2429fa4dd842f8152ebb0884e00d1c16bf WatchSource:0}: Error finding container 7d09b8f4f5f95199cf92aceff45b1c2429fa4dd842f8152ebb0884e00d1c16bf: Status 404 returned error can't find the container with id 7d09b8f4f5f95199cf92aceff45b1c2429fa4dd842f8152ebb0884e00d1c16bf Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.520201 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.520206 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:42 crc kubenswrapper[4966]: E0127 15:43:42.520338 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.520443 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:42 crc kubenswrapper[4966]: E0127 15:43:42.520597 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:42 crc kubenswrapper[4966]: E0127 15:43:42.521307 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.521981 4966 scope.go:117] "RemoveContainer" containerID="8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb" Jan 27 15:43:42 crc kubenswrapper[4966]: E0127 15:43:42.522287 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.530615 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:02:01.221475457 +0000 UTC Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.530713 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 15:43:42 crc kubenswrapper[4966]: I0127 15:43:42.542031 4966 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 15:43:43 crc kubenswrapper[4966]: I0127 15:43:43.063659 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" event={"ID":"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae","Type":"ContainerStarted","Data":"557299c706f2de580e00dfa8295d765aa846e66d53e8e506d874e6214d8738f1"} Jan 27 15:43:43 crc kubenswrapper[4966]: I0127 15:43:43.063740 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" event={"ID":"8ec263f5-3c90-48a5-9baa-9703e5d7e0ae","Type":"ContainerStarted","Data":"7d09b8f4f5f95199cf92aceff45b1c2429fa4dd842f8152ebb0884e00d1c16bf"} Jan 27 15:43:43 crc kubenswrapper[4966]: I0127 15:43:43.084636 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zwhlg" podStartSLOduration=65.08460497 podStartE2EDuration="1m5.08460497s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:42.35950595 +0000 UTC m=+88.662299438" watchObservedRunningTime="2026-01-27 15:43:43.08460497 +0000 UTC m=+89.387398498" Jan 27 15:43:43 crc kubenswrapper[4966]: I0127 15:43:43.085959 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cmsm6" podStartSLOduration=65.085944851 podStartE2EDuration="1m5.085944851s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:43:43.083438973 +0000 UTC m=+89.386232501" watchObservedRunningTime="2026-01-27 15:43:43.085944851 +0000 UTC m=+89.388738379" Jan 27 15:43:43 crc kubenswrapper[4966]: I0127 15:43:43.520511 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:43 crc kubenswrapper[4966]: E0127 15:43:43.520725 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:44 crc kubenswrapper[4966]: I0127 15:43:44.519834 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:44 crc kubenswrapper[4966]: I0127 15:43:44.519885 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:44 crc kubenswrapper[4966]: I0127 15:43:44.519885 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:44 crc kubenswrapper[4966]: E0127 15:43:44.522519 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:44 crc kubenswrapper[4966]: E0127 15:43:44.522617 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:44 crc kubenswrapper[4966]: E0127 15:43:44.522732 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:45 crc kubenswrapper[4966]: I0127 15:43:45.520205 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:45 crc kubenswrapper[4966]: E0127 15:43:45.520415 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:46 crc kubenswrapper[4966]: I0127 15:43:46.520725 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:46 crc kubenswrapper[4966]: I0127 15:43:46.520785 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:46 crc kubenswrapper[4966]: I0127 15:43:46.520859 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:46 crc kubenswrapper[4966]: E0127 15:43:46.520969 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:46 crc kubenswrapper[4966]: E0127 15:43:46.521056 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:46 crc kubenswrapper[4966]: E0127 15:43:46.521302 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:47 crc kubenswrapper[4966]: I0127 15:43:47.519814 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:47 crc kubenswrapper[4966]: E0127 15:43:47.519955 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:48 crc kubenswrapper[4966]: I0127 15:43:48.519891 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:48 crc kubenswrapper[4966]: E0127 15:43:48.520068 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:48 crc kubenswrapper[4966]: I0127 15:43:48.520124 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:48 crc kubenswrapper[4966]: I0127 15:43:48.519932 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:48 crc kubenswrapper[4966]: E0127 15:43:48.520195 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:48 crc kubenswrapper[4966]: E0127 15:43:48.520427 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:49 crc kubenswrapper[4966]: I0127 15:43:49.520307 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:49 crc kubenswrapper[4966]: E0127 15:43:49.520427 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:50 crc kubenswrapper[4966]: I0127 15:43:50.520139 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:50 crc kubenswrapper[4966]: E0127 15:43:50.520568 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:50 crc kubenswrapper[4966]: I0127 15:43:50.520334 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:50 crc kubenswrapper[4966]: I0127 15:43:50.520285 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:50 crc kubenswrapper[4966]: E0127 15:43:50.520644 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:50 crc kubenswrapper[4966]: E0127 15:43:50.520857 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:51 crc kubenswrapper[4966]: I0127 15:43:51.519811 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:51 crc kubenswrapper[4966]: E0127 15:43:51.520055 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:52 crc kubenswrapper[4966]: I0127 15:43:52.519976 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:52 crc kubenswrapper[4966]: I0127 15:43:52.520041 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:52 crc kubenswrapper[4966]: E0127 15:43:52.520127 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:52 crc kubenswrapper[4966]: I0127 15:43:52.520056 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:52 crc kubenswrapper[4966]: E0127 15:43:52.520292 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:52 crc kubenswrapper[4966]: E0127 15:43:52.520291 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:53 crc kubenswrapper[4966]: I0127 15:43:53.520628 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:53 crc kubenswrapper[4966]: E0127 15:43:53.520754 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:54 crc kubenswrapper[4966]: I0127 15:43:54.520422 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:54 crc kubenswrapper[4966]: E0127 15:43:54.522420 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:54 crc kubenswrapper[4966]: I0127 15:43:54.522520 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:54 crc kubenswrapper[4966]: I0127 15:43:54.522636 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:54 crc kubenswrapper[4966]: E0127 15:43:54.522681 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:54 crc kubenswrapper[4966]: E0127 15:43:54.522965 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:55 crc kubenswrapper[4966]: I0127 15:43:55.520829 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:55 crc kubenswrapper[4966]: E0127 15:43:55.520975 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:56 crc kubenswrapper[4966]: I0127 15:43:56.520699 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:56 crc kubenswrapper[4966]: I0127 15:43:56.520753 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:56 crc kubenswrapper[4966]: I0127 15:43:56.521161 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:56 crc kubenswrapper[4966]: E0127 15:43:56.521364 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:56 crc kubenswrapper[4966]: E0127 15:43:56.521530 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:56 crc kubenswrapper[4966]: E0127 15:43:56.521706 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:57 crc kubenswrapper[4966]: I0127 15:43:57.175929 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:57 crc kubenswrapper[4966]: E0127 15:43:57.176184 4966 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:43:57 crc kubenswrapper[4966]: E0127 15:43:57.176294 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs podName:311852f1-9764-49e5-a58a-5c2feee4ed1f nodeName:}" failed. No retries permitted until 2026-01-27 15:45:01.17627182 +0000 UTC m=+167.479065508 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs") pod "network-metrics-daemon-2fsdv" (UID: "311852f1-9764-49e5-a58a-5c2feee4ed1f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:43:57 crc kubenswrapper[4966]: I0127 15:43:57.520754 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:57 crc kubenswrapper[4966]: E0127 15:43:57.521158 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:43:57 crc kubenswrapper[4966]: I0127 15:43:57.521386 4966 scope.go:117] "RemoveContainer" containerID="8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb" Jan 27 15:43:57 crc kubenswrapper[4966]: E0127 15:43:57.521571 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-glbg8_openshift-ovn-kubernetes(4a25d116-d49b-4533-bac7-74bee93062b1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" Jan 27 15:43:58 crc kubenswrapper[4966]: I0127 15:43:58.520351 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:43:58 crc kubenswrapper[4966]: I0127 15:43:58.520494 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:43:58 crc kubenswrapper[4966]: I0127 15:43:58.520552 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:43:58 crc kubenswrapper[4966]: E0127 15:43:58.520760 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:43:58 crc kubenswrapper[4966]: E0127 15:43:58.521271 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:43:58 crc kubenswrapper[4966]: E0127 15:43:58.521841 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:43:59 crc kubenswrapper[4966]: I0127 15:43:59.520529 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:43:59 crc kubenswrapper[4966]: E0127 15:43:59.520763 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:44:00 crc kubenswrapper[4966]: I0127 15:44:00.520252 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:44:00 crc kubenswrapper[4966]: I0127 15:44:00.520347 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:00 crc kubenswrapper[4966]: I0127 15:44:00.520254 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:00 crc kubenswrapper[4966]: E0127 15:44:00.520473 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:44:00 crc kubenswrapper[4966]: E0127 15:44:00.520579 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:44:00 crc kubenswrapper[4966]: E0127 15:44:00.520642 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:44:01 crc kubenswrapper[4966]: I0127 15:44:01.520737 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:44:01 crc kubenswrapper[4966]: E0127 15:44:01.520976 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:44:02 crc kubenswrapper[4966]: I0127 15:44:02.519990 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:44:02 crc kubenswrapper[4966]: I0127 15:44:02.520040 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:02 crc kubenswrapper[4966]: I0127 15:44:02.520015 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:02 crc kubenswrapper[4966]: E0127 15:44:02.520156 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:44:02 crc kubenswrapper[4966]: E0127 15:44:02.520258 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:44:02 crc kubenswrapper[4966]: E0127 15:44:02.520361 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:44:03 crc kubenswrapper[4966]: I0127 15:44:03.520521 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:44:03 crc kubenswrapper[4966]: E0127 15:44:03.520834 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:44:04 crc kubenswrapper[4966]: I0127 15:44:04.520651 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:04 crc kubenswrapper[4966]: I0127 15:44:04.521004 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:04 crc kubenswrapper[4966]: I0127 15:44:04.521150 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:44:04 crc kubenswrapper[4966]: E0127 15:44:04.522828 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:44:04 crc kubenswrapper[4966]: E0127 15:44:04.523096 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:44:04 crc kubenswrapper[4966]: E0127 15:44:04.523253 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:44:05 crc kubenswrapper[4966]: I0127 15:44:05.520662 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:44:05 crc kubenswrapper[4966]: E0127 15:44:05.520811 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:44:06 crc kubenswrapper[4966]: I0127 15:44:06.520130 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:06 crc kubenswrapper[4966]: I0127 15:44:06.520183 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:44:06 crc kubenswrapper[4966]: E0127 15:44:06.520325 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:44:06 crc kubenswrapper[4966]: I0127 15:44:06.520433 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:06 crc kubenswrapper[4966]: E0127 15:44:06.520583 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:44:06 crc kubenswrapper[4966]: E0127 15:44:06.520711 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:44:07 crc kubenswrapper[4966]: I0127 15:44:07.520219 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:44:07 crc kubenswrapper[4966]: E0127 15:44:07.521029 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:44:08 crc kubenswrapper[4966]: I0127 15:44:08.520211 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:08 crc kubenswrapper[4966]: I0127 15:44:08.520479 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:44:08 crc kubenswrapper[4966]: I0127 15:44:08.520560 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:08 crc kubenswrapper[4966]: E0127 15:44:08.520715 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:44:08 crc kubenswrapper[4966]: E0127 15:44:08.521137 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:44:08 crc kubenswrapper[4966]: E0127 15:44:08.521418 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:44:09 crc kubenswrapper[4966]: I0127 15:44:09.520789 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:44:09 crc kubenswrapper[4966]: E0127 15:44:09.521229 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:44:09 crc kubenswrapper[4966]: I0127 15:44:09.521422 4966 scope.go:117] "RemoveContainer" containerID="8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb" Jan 27 15:44:10 crc kubenswrapper[4966]: I0127 15:44:10.168284 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/3.log" Jan 27 15:44:10 crc kubenswrapper[4966]: I0127 15:44:10.171079 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerStarted","Data":"6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae"} Jan 27 15:44:10 crc kubenswrapper[4966]: I0127 15:44:10.172047 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:44:10 crc kubenswrapper[4966]: I0127 15:44:10.475141 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podStartSLOduration=92.475120952 podStartE2EDuration="1m32.475120952s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:10.205672175 +0000 UTC m=+116.508465673" watchObservedRunningTime="2026-01-27 15:44:10.475120952 +0000 UTC m=+116.777914440" Jan 27 15:44:10 crc kubenswrapper[4966]: I0127 15:44:10.475544 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2fsdv"] Jan 27 15:44:10 crc kubenswrapper[4966]: I0127 15:44:10.475668 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:44:10 crc kubenswrapper[4966]: E0127 15:44:10.475774 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:44:10 crc kubenswrapper[4966]: I0127 15:44:10.520597 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:10 crc kubenswrapper[4966]: I0127 15:44:10.520737 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:10 crc kubenswrapper[4966]: E0127 15:44:10.520850 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:44:10 crc kubenswrapper[4966]: E0127 15:44:10.520994 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:44:11 crc kubenswrapper[4966]: I0127 15:44:11.519977 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:44:11 crc kubenswrapper[4966]: E0127 15:44:11.520238 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.520987 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.521084 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:44:12 crc kubenswrapper[4966]: E0127 15:44:12.521142 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:44:12 crc kubenswrapper[4966]: E0127 15:44:12.521419 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fsdv" podUID="311852f1-9764-49e5-a58a-5c2feee4ed1f" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.521626 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:12 crc kubenswrapper[4966]: E0127 15:44:12.521812 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.632288 4966 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.632482 4966 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.675881 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xgjgw"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.676382 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.681307 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.681653 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.682028 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.682463 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.682644 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w8jb4"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.683170 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.683492 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-f2z26"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.683610 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.683837 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.684032 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.684521 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.684752 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.684983 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.685071 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rckw5"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.685184 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.685491 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.685600 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.685839 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.691450 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.692848 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.699952 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sgfdr"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.700547 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.720876 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.721134 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.721351 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.721376 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.721407 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.721646 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.721712 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.721854 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.721957 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.722027 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.738989 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.739008 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.739601 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.739669 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.739988 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.740129 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.746693 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.746727 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fws6n"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.746858 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.746990 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.747177 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.747416 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.747946 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wpbsk"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748172 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748207 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wpbsk"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748186 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748466 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748485 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748552 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748600 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748684 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748758 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748806 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748880 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748759 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.748885 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.749130 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.749280 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.749338 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.749353 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.749369 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.749435 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.749492 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.749521 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.749586 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.751063 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.751172 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.751827 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.751976 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.753846 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.757742 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.758471 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.761509 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.762357 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.763005 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.763598 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.764808 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-config\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.764843 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-config\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.764866 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-encryption-config\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.764886 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-trusted-ca-bundle\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.764982 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-serving-cert\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765052 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-images\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765073 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765090 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0046cf8f-c67b-4936-b3d6-1f7ac02eb919-serving-cert\") pod \"openshift-config-operator-7777fb866f-pn2q5\" (UID: \"0046cf8f-c67b-4936-b3d6-1f7ac02eb919\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765106 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765121 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-client-ca\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765135 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765158 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-node-pullsecrets\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765173 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-serving-cert\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765188 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765203 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765217 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765232 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pw74\" (UniqueName: \"kubernetes.io/projected/0046cf8f-c67b-4936-b3d6-1f7ac02eb919-kube-api-access-4pw74\") pod \"openshift-config-operator-7777fb866f-pn2q5\" (UID: \"0046cf8f-c67b-4936-b3d6-1f7ac02eb919\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765246 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-etcd-serving-ca\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765263 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765278 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-etcd-client\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765295 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-config\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765309 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765322 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzlnp\" (UniqueName: \"kubernetes.io/projected/612fb5e2-ec40-4a52-b6fb-463e64e0e872-kube-api-access-rzlnp\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765335 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-audit-dir\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765356 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9wzv\" (UniqueName: \"kubernetes.io/projected/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-kube-api-access-r9wzv\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765370 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765384 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-config\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765398 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4n4\" (UniqueName: \"kubernetes.io/projected/ac92efeb-93b0-4044-9b79-fbfc19fc629e-kube-api-access-rh4n4\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765412 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-dir\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765426 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765448 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z96mp\" (UniqueName: \"kubernetes.io/projected/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-kube-api-access-z96mp\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765463 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-config\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765476 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-serving-cert\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765489 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-service-ca\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765502 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92efeb-93b0-4044-9b79-fbfc19fc629e-serving-cert\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765518 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765539 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-oauth-config\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765554 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-policies\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765569 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0046cf8f-c67b-4936-b3d6-1f7ac02eb919-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pn2q5\" (UID: \"0046cf8f-c67b-4936-b3d6-1f7ac02eb919\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765584 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-client-ca\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765597 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765611 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7pvp\" (UniqueName: \"kubernetes.io/projected/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-kube-api-access-s7pvp\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765635 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-oauth-serving-cert\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765650 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765671 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-audit\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765688 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848k7\" (UniqueName: \"kubernetes.io/projected/a3c5438a-013d-48da-8a1b-8dd23e17bce6-kube-api-access-848k7\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765701 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-image-import-ca\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.765715 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.768177 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.768494 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.768544 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.771747 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.772919 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.773026 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.773096 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.773173 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.773739 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.773806 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.773871 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.773981 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.774091 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.774498 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.774590 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.774662 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.774789 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.774852 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.777439 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.777807 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-kmmq4"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.778288 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kctp8"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.778599 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-52vpt"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.779089 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-52vpt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.779429 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kmmq4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.779628 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.781942 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.782157 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.782383 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.782675 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.783384 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.784015 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.784266 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.784351 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.784511 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.806280 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.806544 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.807214 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ztxq2"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.807387 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.807806 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.807938 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.807850 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.809544 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.826932 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.828124 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.828261 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.829883 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.830515 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.831391 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w8jb4"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.831505 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.831728 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.832144 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.832465 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.832750 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.833030 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.833146 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.833167 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.833229 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.833812 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.838266 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.838552 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.838759 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.838922 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.839071 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.838963 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.839208 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.839323 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.839537 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.839608 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.839930 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.844121 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qrsjk"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.844445 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.844682 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.844977 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.846465 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.848390 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.848781 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.851193 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.851370 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.851703 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tjhrf"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.852104 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.852141 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.852244 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.853587 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.853644 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.854030 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wzrqq"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.854473 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.856283 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.856419 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.857889 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.858715 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.859058 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.859350 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.859379 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.859867 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.860058 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.861356 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hl7j2"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.861754 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.861843 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.861874 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.861790 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.863569 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-275fm"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.863849 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.864188 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.864358 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.864415 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-f2z26"]
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.864498 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866359 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-etcd-client\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866466 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4ecd65d-fa3e-456b-8db6-314cc20216ed-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866491 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fe20580-4a7b-4b46-9cc2-07c852e9c866-serving-cert\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866512 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-config\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866528 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866545 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64e4f5ae-5aab-40a4-855b-8d7904027e63-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866562 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3fe20580-4a7b-4b46-9cc2-07c852e9c866-audit-policies\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866579 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9wzv\" (UniqueName: \"kubernetes.io/projected/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-kube-api-access-r9wzv\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866594 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzlnp\" (UniqueName: \"kubernetes.io/projected/612fb5e2-ec40-4a52-b6fb-463e64e0e872-kube-api-access-rzlnp\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866609 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-audit-dir\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866624 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866639 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4ecd65d-fa3e-456b-8db6-314cc20216ed-config\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866654 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l25fx\" (UniqueName: \"kubernetes.io/projected/1349c57b-da7f-4882-bb6f-73a883b23cea-kube-api-access-l25fx\") pod \"dns-operator-744455d44c-52vpt\" (UID: \"1349c57b-da7f-4882-bb6f-73a883b23cea\") " pod="openshift-dns-operator/dns-operator-744455d44c-52vpt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866670 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-dir\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866686 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866700 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-config\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866714 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh4n4\" (UniqueName: \"kubernetes.io/projected/ac92efeb-93b0-4044-9b79-fbfc19fc629e-kube-api-access-rh4n4\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866729 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-config\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866743 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-serving-cert\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866838 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z96mp\" (UniqueName: \"kubernetes.io/projected/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-kube-api-access-z96mp\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866857 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjf2\" (UniqueName: \"kubernetes.io/projected/3fe20580-4a7b-4b46-9cc2-07c852e9c866-kube-api-access-4fjf2\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866874 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-service-ca\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866971 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92efeb-93b0-4044-9b79-fbfc19fc629e-serving-cert\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.866993 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmchm\" (UniqueName: \"kubernetes.io/projected/12b9edc6-687c-47b9-b8c6-8fa656fc40de-kube-api-access-bmchm\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867009 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3fe20580-4a7b-4b46-9cc2-07c852e9c866-etcd-client\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867026 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867063 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-policies\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867078 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0046cf8f-c67b-4936-b3d6-1f7ac02eb919-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pn2q5\" (UID: \"0046cf8f-c67b-4936-b3d6-1f7ac02eb919\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867112 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vn7\" (UniqueName: \"kubernetes.io/projected/64e4f5ae-5aab-40a4-855b-8d7904027e63-kube-api-access-z7vn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867146 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-oauth-config\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867164 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-client-ca\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867179 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867195 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3fe20580-4a7b-4b46-9cc2-07c852e9c866-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867211 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7pvp\" (UniqueName: \"kubernetes.io/projected/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-kube-api-access-s7pvp\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867228 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867242 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6902c6a5-688c-4d89-9f2d-126e6cdd5879-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867258 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-oauth-serving-cert\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867300 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vmx\" (UniqueName: \"kubernetes.io/projected/6902c6a5-688c-4d89-9f2d-126e6cdd5879-kube-api-access-78vmx\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867325 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-audit\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867339 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1349c57b-da7f-4882-bb6f-73a883b23cea-metrics-tls\") pod \"dns-operator-744455d44c-52vpt\" (UID: \"1349c57b-da7f-4882-bb6f-73a883b23cea\") " pod="openshift-dns-operator/dns-operator-744455d44c-52vpt"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867356 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-848k7\" (UniqueName: \"kubernetes.io/projected/a3c5438a-013d-48da-8a1b-8dd23e17bce6-kube-api-access-848k7\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867373 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-image-import-ca\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867390 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.867509 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-audit-dir\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869301 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-config\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869420 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-config\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869476 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-encryption-config\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869512 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-serving-cert\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869541 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b9edc6-687c-47b9-b8c6-8fa656fc40de-config\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869624 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12b9edc6-687c-47b9-b8c6-8fa656fc40de-trusted-ca\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk"
Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869665 4966 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4ecd65d-fa3e-456b-8db6-314cc20216ed-service-ca-bundle\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869704 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-trusted-ca-bundle\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869731 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-etcd-service-ca\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869731 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xgjgw"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869853 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-serving-cert\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.869941 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-images\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.870120 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-config\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.870156 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7g7t\" (UniqueName: \"kubernetes.io/projected/a0612699-805c-409c-a48a-d9852f1c7f4f-kube-api-access-k7g7t\") pod \"cluster-samples-operator-665b6dd947-c28bj\" (UID: \"a0612699-805c-409c-a48a-d9852f1c7f4f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.870212 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" Jan 27 15:44:12 crc 
kubenswrapper[4966]: I0127 15:44:12.870430 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0046cf8f-c67b-4936-b3d6-1f7ac02eb919-serving-cert\") pod \"openshift-config-operator-7777fb866f-pn2q5\" (UID: \"0046cf8f-c67b-4936-b3d6-1f7ac02eb919\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.870467 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e4f5ae-5aab-40a4-855b-8d7904027e63-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.870672 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-client-ca\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.870846 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b9edc6-687c-47b9-b8c6-8fa656fc40de-serving-cert\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.870884 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0612699-805c-409c-a48a-d9852f1c7f4f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-c28bj\" (UID: \"a0612699-805c-409c-a48a-d9852f1c7f4f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871089 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871126 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871236 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-node-pullsecrets\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871376 4966 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871423 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6902c6a5-688c-4d89-9f2d-126e6cdd5879-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871575 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-serving-cert\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871623 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871738 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56mh\" (UniqueName: \"kubernetes.io/projected/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-kube-api-access-z56mh\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871811 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-etcd-serving-ca\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.871856 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.872002 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pw74\" (UniqueName: \"kubernetes.io/projected/0046cf8f-c67b-4936-b3d6-1f7ac02eb919-kube-api-access-4pw74\") pod \"openshift-config-operator-7777fb866f-pn2q5\" (UID: \"0046cf8f-c67b-4936-b3d6-1f7ac02eb919\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.872045 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4ecd65d-fa3e-456b-8db6-314cc20216ed-serving-cert\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.872129 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe20580-4a7b-4b46-9cc2-07c852e9c866-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.872222 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3fe20580-4a7b-4b46-9cc2-07c852e9c866-encryption-config\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.872262 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.872404 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8f6t\" (UniqueName: \"kubernetes.io/projected/c4ecd65d-fa3e-456b-8db6-314cc20216ed-kube-api-access-p8f6t\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.872481 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-etcd-ca\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.872499 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-etcd-client\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.872647 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3fe20580-4a7b-4b46-9cc2-07c852e9c866-audit-dir\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.874372 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fws6n"] Jan 27 15:44:12 crc kubenswrapper[4966]: 
I0127 15:44:12.874698 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-config\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.875446 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-config\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.874820 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.875846 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-service-ca\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.876526 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0046cf8f-c67b-4936-b3d6-1f7ac02eb919-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pn2q5\" (UID: \"0046cf8f-c67b-4936-b3d6-1f7ac02eb919\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.876626 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-config\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.877483 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-dir\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.878737 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-policies\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.878141 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.880745 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-config\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.890517 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.891665 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-oauth-config\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.891720 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.891764 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.887621 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-etcd-client\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.892144 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-config\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.892203 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-trusted-ca-bundle\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.892253 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.892380 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.893032 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-image-import-ca\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.894752 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-serving-cert\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.895216 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-images\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.895396 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-audit\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.895447 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-encryption-config\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.895402 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-node-pullsecrets\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.895676 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-client-ca\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.895916 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-etcd-serving-ca\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 
15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.896014 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-oauth-serving-cert\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.896583 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.896725 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-serving-cert\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.896734 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-client-ca\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.897279 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.897476 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-96rmm"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.897527 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.897657 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92efeb-93b0-4044-9b79-fbfc19fc629e-serving-cert\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.898515 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.903641 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.903683 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.903694 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rckw5"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.903704 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.903784 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.904106 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.905856 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.907227 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0046cf8f-c67b-4936-b3d6-1f7ac02eb919-serving-cert\") pod \"openshift-config-operator-7777fb866f-pn2q5\" (UID: \"0046cf8f-c67b-4936-b3d6-1f7ac02eb919\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.907243 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.907403 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.907767 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wpbsk"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.907887 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.908075 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: 
\"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.911761 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.913130 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.915014 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kmmq4"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.917037 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.919472 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.921400 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-serving-cert\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.922283 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.924186 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sgfdr"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.926184 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.927433 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kctp8"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.928604 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qrsjk"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.930876 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-52vpt"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.931986 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.933647 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.934256 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.935800 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.936359 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.937408 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.938696 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tjhrf"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.938958 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.939838 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.941528 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ztxq2"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.943006 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.944106 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.945098 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nb7qq"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.945880 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nb7qq" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.946997 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9pb5n"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.948244 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9pb5n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.948479 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9pb5n"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.950303 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.951427 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nb7qq"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.952599 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-96rmm"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.954311 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-275fm"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.956163 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.957552 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hl7j2"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.958055 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.958639 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-h2595"] Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.959131 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h2595" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973521 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6902c6a5-688c-4d89-9f2d-126e6cdd5879-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973552 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z56mh\" (UniqueName: \"kubernetes.io/projected/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-kube-api-access-z56mh\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973578 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4ecd65d-fa3e-456b-8db6-314cc20216ed-serving-cert\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973593 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe20580-4a7b-4b46-9cc2-07c852e9c866-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973609 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3fe20580-4a7b-4b46-9cc2-07c852e9c866-encryption-config\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973628 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8f6t\" (UniqueName: \"kubernetes.io/projected/c4ecd65d-fa3e-456b-8db6-314cc20216ed-kube-api-access-p8f6t\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973643 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-etcd-ca\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973660 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-etcd-client\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973675 4966 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3fe20580-4a7b-4b46-9cc2-07c852e9c866-audit-dir\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973690 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4ecd65d-fa3e-456b-8db6-314cc20216ed-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973707 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fe20580-4a7b-4b46-9cc2-07c852e9c866-serving-cert\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973729 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64e4f5ae-5aab-40a4-855b-8d7904027e63-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973750 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3fe20580-4a7b-4b46-9cc2-07c852e9c866-audit-policies\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973775 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4ecd65d-fa3e-456b-8db6-314cc20216ed-config\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973789 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l25fx\" (UniqueName: \"kubernetes.io/projected/1349c57b-da7f-4882-bb6f-73a883b23cea-kube-api-access-l25fx\") pod \"dns-operator-744455d44c-52vpt\" (UID: \"1349c57b-da7f-4882-bb6f-73a883b23cea\") " pod="openshift-dns-operator/dns-operator-744455d44c-52vpt" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973823 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmchm\" (UniqueName: \"kubernetes.io/projected/12b9edc6-687c-47b9-b8c6-8fa656fc40de-kube-api-access-bmchm\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973839 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3fe20580-4a7b-4b46-9cc2-07c852e9c866-etcd-client\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: 
\"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973854 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjf2\" (UniqueName: \"kubernetes.io/projected/3fe20580-4a7b-4b46-9cc2-07c852e9c866-kube-api-access-4fjf2\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973879 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vn7\" (UniqueName: \"kubernetes.io/projected/64e4f5ae-5aab-40a4-855b-8d7904027e63-kube-api-access-z7vn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973909 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3fe20580-4a7b-4b46-9cc2-07c852e9c866-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973931 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6902c6a5-688c-4d89-9f2d-126e6cdd5879-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973946 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78vmx\" (UniqueName: \"kubernetes.io/projected/6902c6a5-688c-4d89-9f2d-126e6cdd5879-kube-api-access-78vmx\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973967 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1349c57b-da7f-4882-bb6f-73a883b23cea-metrics-tls\") pod \"dns-operator-744455d44c-52vpt\" (UID: \"1349c57b-da7f-4882-bb6f-73a883b23cea\") " pod="openshift-dns-operator/dns-operator-744455d44c-52vpt" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.973993 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b9edc6-687c-47b9-b8c6-8fa656fc40de-config\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974009 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12b9edc6-687c-47b9-b8c6-8fa656fc40de-trusted-ca\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974024 
4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4ecd65d-fa3e-456b-8db6-314cc20216ed-service-ca-bundle\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974039 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-etcd-service-ca\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974054 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-serving-cert\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974076 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-config\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974091 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7g7t\" (UniqueName: \"kubernetes.io/projected/a0612699-805c-409c-a48a-d9852f1c7f4f-kube-api-access-k7g7t\") pod \"cluster-samples-operator-665b6dd947-c28bj\" (UID: \"a0612699-805c-409c-a48a-d9852f1c7f4f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974107 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e4f5ae-5aab-40a4-855b-8d7904027e63-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974125 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b9edc6-687c-47b9-b8c6-8fa656fc40de-serving-cert\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974127 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe20580-4a7b-4b46-9cc2-07c852e9c866-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.974141 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0612699-805c-409c-a48a-d9852f1c7f4f-samples-operator-tls\") 
pod \"cluster-samples-operator-665b6dd947-c28bj\" (UID: \"a0612699-805c-409c-a48a-d9852f1c7f4f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.975004 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-etcd-ca\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.975192 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6902c6a5-688c-4d89-9f2d-126e6cdd5879-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.975728 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3fe20580-4a7b-4b46-9cc2-07c852e9c866-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.976128 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e4f5ae-5aab-40a4-855b-8d7904027e63-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.976696 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6902c6a5-688c-4d89-9f2d-126e6cdd5879-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.976753 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b9edc6-687c-47b9-b8c6-8fa656fc40de-config\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.976776 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3fe20580-4a7b-4b46-9cc2-07c852e9c866-audit-dir\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.976975 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0612699-805c-409c-a48a-d9852f1c7f4f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-c28bj\" (UID: \"a0612699-805c-409c-a48a-d9852f1c7f4f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.977018 4966 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12b9edc6-687c-47b9-b8c6-8fa656fc40de-trusted-ca\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.977021 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4ecd65d-fa3e-456b-8db6-314cc20216ed-serving-cert\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.977446 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3fe20580-4a7b-4b46-9cc2-07c852e9c866-audit-policies\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.977640 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4ecd65d-fa3e-456b-8db6-314cc20216ed-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.977693 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4ecd65d-fa3e-456b-8db6-314cc20216ed-service-ca-bundle\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.977894 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-etcd-client\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.977974 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4ecd65d-fa3e-456b-8db6-314cc20216ed-config\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.978177 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-etcd-service-ca\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.978387 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.980192 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/1349c57b-da7f-4882-bb6f-73a883b23cea-metrics-tls\") pod \"dns-operator-744455d44c-52vpt\" (UID: \"1349c57b-da7f-4882-bb6f-73a883b23cea\") " pod="openshift-dns-operator/dns-operator-744455d44c-52vpt" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.980562 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64e4f5ae-5aab-40a4-855b-8d7904027e63-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.981029 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3fe20580-4a7b-4b46-9cc2-07c852e9c866-encryption-config\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.981066 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fe20580-4a7b-4b46-9cc2-07c852e9c866-serving-cert\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.981596 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b9edc6-687c-47b9-b8c6-8fa656fc40de-serving-cert\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.981865 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-serving-cert\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.985192 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-config\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:12 crc kubenswrapper[4966]: I0127 15:44:12.988629 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3fe20580-4a7b-4b46-9cc2-07c852e9c866-etcd-client\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.018740 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.038383 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.058018 4966 reflector.go:368] Caches populated for *v1.Secret 
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.078506 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.098626 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.117986 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.159098 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.178665 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.198791 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.218715 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.239316 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.259376 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.279116 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.298824 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.319185 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.338853 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.358564 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.378537 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.399110 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.419094 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.439891 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.458681 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.479660 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.500126 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.519800 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.520543 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.539188 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.559100 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.579364 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.599603 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.618952 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.655005 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.669525 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.680754 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.698931 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.718703 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.738324 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.758753 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.779399 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.798629 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.819663 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.839492 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.858345 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.877583 4966 request.go:700] Waited for 1.019799364s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&limit=500&resourceVersion=0
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.879995 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.899254 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.918448 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.939335 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.959012 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.979676 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 15:44:13 crc kubenswrapper[4966]: I0127 15:44:13.999642 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.019268 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.039666 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.059227 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.079581 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.099788 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
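The request.go:700 "Waited ... due to client-side throttling, not priority and fairness" line above is emitted by client-go when its own local token-bucket rate limiter, rather than the server's API Priority and Fairness, delays a request. A small Go sketch of the same blocking behavior using client-go's flowcontrol package; the 5 QPS / burst 10 values mirror client-go's historical defaults and are illustrative, not taken from this log:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Illustrative defaults: 5 requests/second with a burst of 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until the local token bucket has a token
		if waited := time.Since(start); waited > time.Millisecond {
			// client-go reports this as "Waited for <d> due to client-side throttling".
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited)
		}
	}
}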
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.119081 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.139258 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.159072 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.180325 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.200178 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.219602 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.239674 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.259042 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.279362 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.299577 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.319608 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.339587 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.369515 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.379412 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.400393 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.419350 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.440073 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.459637 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.520203 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.520271 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.520511 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.619237 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.638716 4966 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.658978 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.679142 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.699184 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.718938 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.738399 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.759114 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.778813 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.798769 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.818866 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.839705 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.858808 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 27 15:44:14 crc kubenswrapper[4966]: I0127 15:44:14.897290 4966 request.go:700] Waited for 1.923165012s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.079138 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.099659 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
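Each reflector.go:368 "Caches populated" line above marks one reflector finishing its initial LIST for a single ConfigMap or Secret; consumers normally gate their work on that moment with WaitForCacheSync. A minimal client-go sketch, assuming in-cluster credentials and using the openshift-marketplace namespace from the entries above purely as an example:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the process runs in a pod
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch ConfigMaps in one namespace, like the per-namespace
	// reflectors behind the "Caches populated" lines above.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-marketplace"))
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Blocks until the initial LIST has populated the local cache.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated for *v1.ConfigMap")
}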
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.102926 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8729\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-kube-api-access-f8729\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.102976 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-trusted-ca\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103012 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3630c88c-69cc-44c2-8a80-90c02ace87f5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103101 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-bound-sa-token\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103181 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b8111aeb-2c95-4953-a2d0-586c5fcd4940-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103241 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3630c88c-69cc-44c2-8a80-90c02ace87f5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103440 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b8111aeb-2c95-4953-a2d0-586c5fcd4940-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103477 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phft\" (UniqueName: \"kubernetes.io/projected/3630c88c-69cc-44c2-8a80-90c02ace87f5-kube-api-access-9phft\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103520 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103541 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-tls\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103579 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3630c88c-69cc-44c2-8a80-90c02ace87f5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103602 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-certificates\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.103624 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnfl5\" (UniqueName: \"kubernetes.io/projected/36c5078e-fb86-4817-a08e-6d4b4e2bee7f-kube-api-access-hnfl5\") pod \"downloads-7954f5f757-kmmq4\" (UID: \"36c5078e-fb86-4817-a08e-6d4b4e2bee7f\") " pod="openshift-console/downloads-7954f5f757-kmmq4"
Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.104106 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:15.604079099 +0000 UTC m=+121.906872627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.118824 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.139411 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.159973 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.180681 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.199607 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.204632 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.204828 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:15.704783973 +0000 UTC m=+122.007577501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
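The two E0127 nestedpendingoperations.go:348 errors above show how the kubelet handles a failed volume operation: it records the failure and refuses to retry until a backoff deadline passes, starting at the logged "durationBeforeRetry 500ms" and growing on repeated failures. A stdlib-only Go sketch of that pattern; the 500 ms starting value matches the log, while the doubling factor and two-minute cap are illustrative assumptions:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps retrying op, doubling the wait after every
// failure, the way nestedpendingoperations defers a failed MountDevice.
func retryWithBackoff(op func() error) {
	backoff := 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
	maxBackoff := 2 * time.Minute     // illustrative cap, not taken from the log
	for {
		err := op()
		if err == nil {
			return
		}
		fmt.Printf("operation failed: %v; no retries permitted for %v\n", err, backoff)
		time.Sleep(backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
		}
		return nil // e.g. the CSI driver finally registered
	})
}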
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.204891 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6c2j\" (UniqueName: \"kubernetes.io/projected/3438edc5-62d1-4e68-b4ac-aa41a4240e78-kube-api-access-c6c2j\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.204935 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.204988 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p45j\" (UniqueName: \"kubernetes.io/projected/1317de86-7041-4b5a-8403-98489b8dc338-kube-api-access-7p45j\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.205443 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-socket-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.205566 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n4jx\" (UniqueName: \"kubernetes.io/projected/330c84f4-5179-4ec9-92d5-4a5dd18c799b-kube-api-access-6n4jx\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.205612 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56ckh\" (UniqueName: \"kubernetes.io/projected/4d3b7ce8-d257-4466-8385-0e506ba4cb38-kube-api-access-56ckh\") pod \"collect-profiles-29492130-nl8kr\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.205650 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxs4\" (UniqueName: \"kubernetes.io/projected/1348f4a2-fbf7-4e01-82ad-3943f3bc628e-kube-api-access-rfxs4\") pod \"migrator-59844c95c7-ssfvh\" (UID: \"1348f4a2-fbf7-4e01-82ad-3943f3bc628e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.205714 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshm4\" (UniqueName: \"kubernetes.io/projected/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-kube-api-access-vshm4\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.205840 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b8111aeb-2c95-4953-a2d0-586c5fcd4940-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.206282 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dzk6\" (UniqueName: \"kubernetes.io/projected/44a925db-1525-422d-ac47-5e1ded16d64f-kube-api-access-5dzk6\") pod \"multus-admission-controller-857f4d67dd-qrsjk\" (UID: \"44a925db-1525-422d-ac47-5e1ded16d64f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.206330 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b8111aeb-2c95-4953-a2d0-586c5fcd4940-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.206415 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-tls\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.206452 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjzc\" (UniqueName: \"kubernetes.io/projected/3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb-kube-api-access-8zjzc\") pod \"machine-config-server-h2595\" (UID: \"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb\") " pod="openshift-machine-config-operator/machine-config-server-h2595"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.206754 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208280 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-stats-auth\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208341 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb-node-bootstrap-token\") pod \"machine-config-server-h2595\" (UID: \"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb\") " pod="openshift-machine-config-operator/machine-config-server-h2595"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208377 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r777x\" (UniqueName: \"kubernetes.io/projected/d00c259a-2ad3-44dd-97d2-53e763da5ab1-kube-api-access-r777x\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4jwq\" (UID: \"d00c259a-2ad3-44dd-97d2-53e763da5ab1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208408 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv77k\" (UniqueName: \"kubernetes.io/projected/ec3fb998-85ac-4c16-be5e-3dd48da929df-kube-api-access-pv77k\") pod \"ingress-canary-9pb5n\" (UID: \"ec3fb998-85ac-4c16-be5e-3dd48da929df\") " pod="openshift-ingress-canary/ingress-canary-9pb5n"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208468 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1317de86-7041-4b5a-8403-98489b8dc338-srv-cert\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208498 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/330c84f4-5179-4ec9-92d5-4a5dd18c799b-proxy-tls\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208526 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-csi-data-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208576 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-metrics-certs\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208631 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7e627b1-5b87-424a-8640-f721cae406ec-config-volume\") pod \"dns-default-nb7qq\" (UID: \"d7e627b1-5b87-424a-8640-f721cae406ec\") " pod="openshift-dns/dns-default-nb7qq"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208694 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpg5v\" (UniqueName: \"kubernetes.io/projected/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-kube-api-access-fpg5v\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq"
started for volume \"kube-api-access-fpg5v\" (UniqueName: \"kubernetes.io/projected/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-kube-api-access-fpg5v\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208756 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dab55df7-5589-4454-9f76-2318e87b02bb-metrics-tls\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208786 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/805c6450-f56a-415f-85de-4a5b1df954b3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208821 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-bound-sa-token\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.208875 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbk28\" (UniqueName: \"kubernetes.io/projected/6048e5b0-2eb7-41b9-a0e1-53651ff008e2-kube-api-access-qbk28\") pod \"service-ca-operator-777779d784-275fm\" (UID: \"6048e5b0-2eb7-41b9-a0e1-53651ff008e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209190 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/805c6450-f56a-415f-85de-4a5b1df954b3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209230 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b8111aeb-2c95-4953-a2d0-586c5fcd4940-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209262 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cr8s9\" (UID: \"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209312 4966 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdcf7\" (UniqueName: \"kubernetes.io/projected/08398814-3579-49c5-bf30-b8e700fabdab-kube-api-access-cdcf7\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209378 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4f1e013-7175-4268-908c-10658a7a8f1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.209450 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:15.709414067 +0000 UTC m=+122.012207625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209506 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3438edc5-62d1-4e68-b4ac-aa41a4240e78-proxy-tls\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209546 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3b7ce8-d257-4466-8385-0e506ba4cb38-secret-volume\") pod \"collect-profiles-29492130-nl8kr\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209573 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3630c88c-69cc-44c2-8a80-90c02ace87f5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209597 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d00c259a-2ad3-44dd-97d2-53e763da5ab1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4jwq\" (UID: \"d00c259a-2ad3-44dd-97d2-53e763da5ab1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209621 4966 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4f1e013-7175-4268-908c-10658a7a8f1f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209689 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f02135d5-ce67-4a94-9f94-60c29b672231-srv-cert\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209713 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805c6450-f56a-415f-85de-4a5b1df954b3-config\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209745 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhbg4\" (UniqueName: \"kubernetes.io/projected/fccdaa28-9674-4bb6-9c58-3f3905df1e56-kube-api-access-lhbg4\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209769 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/767a1177-008b-4103-b582-5a679d5d6384-auth-proxy-config\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.209811 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.210511 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-registration-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.210582 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6048e5b0-2eb7-41b9-a0e1-53651ff008e2-serving-cert\") pod \"service-ca-operator-777779d784-275fm\" (UID: \"6048e5b0-2eb7-41b9-a0e1-53651ff008e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.210644 4966 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/767a1177-008b-4103-b582-5a679d5d6384-config\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.210704 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.210745 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-default-certificate\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.210778 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj7pt\" (UniqueName: \"kubernetes.io/projected/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-kube-api-access-bj7pt\") pod \"package-server-manager-789f6589d5-cr8s9\" (UID: \"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.211614 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/330c84f4-5179-4ec9-92d5-4a5dd18c799b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.211666 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3b7ce8-d257-4466-8385-0e506ba4cb38-config-volume\") pod \"collect-profiles-29492130-nl8kr\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.211846 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblmp\" (UniqueName: \"kubernetes.io/projected/767a1177-008b-4103-b582-5a679d5d6384-kube-api-access-rblmp\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.211954 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9phft\" (UniqueName: \"kubernetes.io/projected/3630c88c-69cc-44c2-8a80-90c02ace87f5-kube-api-access-9phft\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212000 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-plugins-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212064 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x287c\" (UniqueName: \"kubernetes.io/projected/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-kube-api-access-x287c\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212103 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb-certs\") pod \"machine-config-server-h2595\" (UID: \"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb\") " pod="openshift-machine-config-operator/machine-config-server-h2595" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212168 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4f1e013-7175-4268-908c-10658a7a8f1f-config\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212221 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dab55df7-5589-4454-9f76-2318e87b02bb-trusted-ca\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212254 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mwc8\" (UniqueName: \"kubernetes.io/projected/cfed2cb5-8390-4b53-998a-195d7cc17c90-kube-api-access-9mwc8\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212298 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3630c88c-69cc-44c2-8a80-90c02ace87f5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212332 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-webhook-cert\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212369 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-certificates\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212406 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnfl5\" (UniqueName: \"kubernetes.io/projected/36c5078e-fb86-4817-a08e-6d4b4e2bee7f-kube-api-access-hnfl5\") pod \"downloads-7954f5f757-kmmq4\" (UID: \"36c5078e-fb86-4817-a08e-6d4b4e2bee7f\") " pod="openshift-console/downloads-7954f5f757-kmmq4" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212440 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212834 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.212963 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cfed2cb5-8390-4b53-998a-195d7cc17c90-signing-key\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.213055 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-tmpfs\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.213203 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.213362 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec3fb998-85ac-4c16-be5e-3dd48da929df-cert\") pod \"ingress-canary-9pb5n\" (UID: \"ec3fb998-85ac-4c16-be5e-3dd48da929df\") " pod="openshift-ingress-canary/ingress-canary-9pb5n" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 
15:44:15.213639 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8729\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-kube-api-access-f8729\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.213746 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-mountpoint-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.213889 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44a925db-1525-422d-ac47-5e1ded16d64f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qrsjk\" (UID: \"44a925db-1525-422d-ac47-5e1ded16d64f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.214054 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cfed2cb5-8390-4b53-998a-195d7cc17c90-signing-cabundle\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.214139 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/767a1177-008b-4103-b582-5a679d5d6384-machine-approver-tls\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.214225 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj858\" (UniqueName: \"kubernetes.io/projected/dab55df7-5589-4454-9f76-2318e87b02bb-kube-api-access-fj858\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.214359 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-trusted-ca\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.214467 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3630c88c-69cc-44c2-8a80-90c02ace87f5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.214545 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6048e5b0-2eb7-41b9-a0e1-53651ff008e2-config\") pod \"service-ca-operator-777779d784-275fm\" (UID: \"6048e5b0-2eb7-41b9-a0e1-53651ff008e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215085 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215112 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/330c84f4-5179-4ec9-92d5-4a5dd18c799b-images\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215167 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f02135d5-ce67-4a94-9f94-60c29b672231-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215185 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dab55df7-5589-4454-9f76-2318e87b02bb-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215199 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08398814-3579-49c5-bf30-b8e700fabdab-service-ca-bundle\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215215 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92cg5\" (UniqueName: \"kubernetes.io/projected/f02135d5-ce67-4a94-9f94-60c29b672231-kube-api-access-92cg5\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215310 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1317de86-7041-4b5a-8403-98489b8dc338-profile-collector-cert\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215380 4966 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-apiservice-cert\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215484 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bm5r\" (UniqueName: \"kubernetes.io/projected/d7e627b1-5b87-424a-8640-f721cae406ec-kube-api-access-7bm5r\") pod \"dns-default-nb7qq\" (UID: \"d7e627b1-5b87-424a-8640-f721cae406ec\") " pod="openshift-dns/dns-default-nb7qq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215554 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3438edc5-62d1-4e68-b4ac-aa41a4240e78-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215600 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7e627b1-5b87-424a-8640-f721cae406ec-metrics-tls\") pod \"dns-default-nb7qq\" (UID: \"d7e627b1-5b87-424a-8640-f721cae406ec\") " pod="openshift-dns/dns-default-nb7qq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.215721 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-certificates\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.218307 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.239127 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.257931 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.278887 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.298339 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.316369 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.316557 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:15.816521221 +0000 UTC m=+122.119314749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.316651 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-registration-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.316698 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6048e5b0-2eb7-41b9-a0e1-53651ff008e2-serving-cert\") pod \"service-ca-operator-777779d784-275fm\" (UID: \"6048e5b0-2eb7-41b9-a0e1-53651ff008e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.316755 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/767a1177-008b-4103-b582-5a679d5d6384-config\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.316808 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.316861 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-default-certificate\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317174 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj7pt\" (UniqueName: \"kubernetes.io/projected/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-kube-api-access-bj7pt\") pod \"package-server-manager-789f6589d5-cr8s9\" (UID: \"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317208 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-registration-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " 
pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317241 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3b7ce8-d257-4466-8385-0e506ba4cb38-config-volume\") pod \"collect-profiles-29492130-nl8kr\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317308 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/330c84f4-5179-4ec9-92d5-4a5dd18c799b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317377 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rblmp\" (UniqueName: \"kubernetes.io/projected/767a1177-008b-4103-b582-5a679d5d6384-kube-api-access-rblmp\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317424 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-plugins-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317503 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4f1e013-7175-4268-908c-10658a7a8f1f-config\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317568 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-plugins-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317608 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x287c\" (UniqueName: \"kubernetes.io/projected/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-kube-api-access-x287c\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317659 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb-certs\") pod \"machine-config-server-h2595\" (UID: \"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb\") " pod="openshift-machine-config-operator/machine-config-server-h2595" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317723 4966 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dab55df7-5589-4454-9f76-2318e87b02bb-trusted-ca\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317772 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mwc8\" (UniqueName: \"kubernetes.io/projected/cfed2cb5-8390-4b53-998a-195d7cc17c90-kube-api-access-9mwc8\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317819 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-webhook-cert\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.317950 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318003 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cfed2cb5-8390-4b53-998a-195d7cc17c90-signing-key\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318014 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/330c84f4-5179-4ec9-92d5-4a5dd18c799b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318056 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318108 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec3fb998-85ac-4c16-be5e-3dd48da929df-cert\") pod \"ingress-canary-9pb5n\" (UID: \"ec3fb998-85ac-4c16-be5e-3dd48da929df\") " pod="openshift-ingress-canary/ingress-canary-9pb5n" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318152 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-tmpfs\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318223 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318309 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-mountpoint-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318364 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44a925db-1525-422d-ac47-5e1ded16d64f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qrsjk\" (UID: \"44a925db-1525-422d-ac47-5e1ded16d64f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318413 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cfed2cb5-8390-4b53-998a-195d7cc17c90-signing-cabundle\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318425 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-mountpoint-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318461 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj858\" (UniqueName: \"kubernetes.io/projected/dab55df7-5589-4454-9f76-2318e87b02bb-kube-api-access-fj858\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318510 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/767a1177-008b-4103-b582-5a679d5d6384-machine-approver-tls\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318590 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6048e5b0-2eb7-41b9-a0e1-53651ff008e2-config\") pod \"service-ca-operator-777779d784-275fm\" (UID: \"6048e5b0-2eb7-41b9-a0e1-53651ff008e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318685 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318736 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/330c84f4-5179-4ec9-92d5-4a5dd18c799b-images\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318810 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f02135d5-ce67-4a94-9f94-60c29b672231-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318864 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dab55df7-5589-4454-9f76-2318e87b02bb-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.318954 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1317de86-7041-4b5a-8403-98489b8dc338-profile-collector-cert\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319006 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08398814-3579-49c5-bf30-b8e700fabdab-service-ca-bundle\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319057 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92cg5\" (UniqueName: \"kubernetes.io/projected/f02135d5-ce67-4a94-9f94-60c29b672231-kube-api-access-92cg5\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319004 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-tmpfs\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319113 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-apiservice-cert\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: 
\"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319177 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7e627b1-5b87-424a-8640-f721cae406ec-metrics-tls\") pod \"dns-default-nb7qq\" (UID: \"d7e627b1-5b87-424a-8640-f721cae406ec\") " pod="openshift-dns/dns-default-nb7qq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319274 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bm5r\" (UniqueName: \"kubernetes.io/projected/d7e627b1-5b87-424a-8640-f721cae406ec-kube-api-access-7bm5r\") pod \"dns-default-nb7qq\" (UID: \"d7e627b1-5b87-424a-8640-f721cae406ec\") " pod="openshift-dns/dns-default-nb7qq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319329 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3438edc5-62d1-4e68-b4ac-aa41a4240e78-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319385 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319437 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6c2j\" (UniqueName: \"kubernetes.io/projected/3438edc5-62d1-4e68-b4ac-aa41a4240e78-kube-api-access-c6c2j\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319497 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p45j\" (UniqueName: \"kubernetes.io/projected/1317de86-7041-4b5a-8403-98489b8dc338-kube-api-access-7p45j\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319574 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-socket-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319629 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n4jx\" (UniqueName: \"kubernetes.io/projected/330c84f4-5179-4ec9-92d5-4a5dd18c799b-kube-api-access-6n4jx\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319680 
4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56ckh\" (UniqueName: \"kubernetes.io/projected/4d3b7ce8-d257-4466-8385-0e506ba4cb38-kube-api-access-56ckh\") pod \"collect-profiles-29492130-nl8kr\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319731 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfxs4\" (UniqueName: \"kubernetes.io/projected/1348f4a2-fbf7-4e01-82ad-3943f3bc628e-kube-api-access-rfxs4\") pod \"migrator-59844c95c7-ssfvh\" (UID: \"1348f4a2-fbf7-4e01-82ad-3943f3bc628e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319784 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dzk6\" (UniqueName: \"kubernetes.io/projected/44a925db-1525-422d-ac47-5e1ded16d64f-kube-api-access-5dzk6\") pod \"multus-admission-controller-857f4d67dd-qrsjk\" (UID: \"44a925db-1525-422d-ac47-5e1ded16d64f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.319834 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vshm4\" (UniqueName: \"kubernetes.io/projected/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-kube-api-access-vshm4\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320111 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320182 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zjzc\" (UniqueName: \"kubernetes.io/projected/3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb-kube-api-access-8zjzc\") pod \"machine-config-server-h2595\" (UID: \"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb\") " pod="openshift-machine-config-operator/machine-config-server-h2595" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320247 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3438edc5-62d1-4e68-b4ac-aa41a4240e78-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320259 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-stats-auth\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320316 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb-node-bootstrap-token\") pod \"machine-config-server-h2595\" (UID: \"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb\") " pod="openshift-machine-config-operator/machine-config-server-h2595" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320367 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r777x\" (UniqueName: \"kubernetes.io/projected/d00c259a-2ad3-44dd-97d2-53e763da5ab1-kube-api-access-r777x\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4jwq\" (UID: \"d00c259a-2ad3-44dd-97d2-53e763da5ab1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320261 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-socket-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320419 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv77k\" (UniqueName: \"kubernetes.io/projected/ec3fb998-85ac-4c16-be5e-3dd48da929df-kube-api-access-pv77k\") pod \"ingress-canary-9pb5n\" (UID: \"ec3fb998-85ac-4c16-be5e-3dd48da929df\") " pod="openshift-ingress-canary/ingress-canary-9pb5n" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320477 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1317de86-7041-4b5a-8403-98489b8dc338-srv-cert\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320548 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/330c84f4-5179-4ec9-92d5-4a5dd18c799b-proxy-tls\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320596 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-csi-data-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.320624 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:15.820610078 +0000 UTC m=+122.123403566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320649 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-metrics-certs\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320671 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7e627b1-5b87-424a-8640-f721cae406ec-config-volume\") pod \"dns-default-nb7qq\" (UID: \"d7e627b1-5b87-424a-8640-f721cae406ec\") " pod="openshift-dns/dns-default-nb7qq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320694 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpg5v\" (UniqueName: \"kubernetes.io/projected/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-kube-api-access-fpg5v\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320713 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/805c6450-f56a-415f-85de-4a5b1df954b3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320732 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dab55df7-5589-4454-9f76-2318e87b02bb-metrics-tls\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320757 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbk28\" (UniqueName: \"kubernetes.io/projected/6048e5b0-2eb7-41b9-a0e1-53651ff008e2-kube-api-access-qbk28\") pod \"service-ca-operator-777779d784-275fm\" (UID: \"6048e5b0-2eb7-41b9-a0e1-53651ff008e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320755 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fccdaa28-9674-4bb6-9c58-3f3905df1e56-csi-data-dir\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320781 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/805c6450-f56a-415f-85de-4a5b1df954b3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320860 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdcf7\" (UniqueName: \"kubernetes.io/projected/08398814-3579-49c5-bf30-b8e700fabdab-kube-api-access-cdcf7\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.320951 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cr8s9\" (UID: \"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321032 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4f1e013-7175-4268-908c-10658a7a8f1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321088 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3b7ce8-d257-4466-8385-0e506ba4cb38-secret-volume\") pod \"collect-profiles-29492130-nl8kr\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321140 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3438edc5-62d1-4e68-b4ac-aa41a4240e78-proxy-tls\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321206 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d00c259a-2ad3-44dd-97d2-53e763da5ab1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4jwq\" (UID: \"d00c259a-2ad3-44dd-97d2-53e763da5ab1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321256 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4f1e013-7175-4268-908c-10658a7a8f1f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321332 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/805c6450-f56a-415f-85de-4a5b1df954b3-config\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321378 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f02135d5-ce67-4a94-9f94-60c29b672231-srv-cert\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321436 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhbg4\" (UniqueName: \"kubernetes.io/projected/fccdaa28-9674-4bb6-9c58-3f3905df1e56-kube-api-access-lhbg4\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321481 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/767a1177-008b-4103-b582-5a679d5d6384-auth-proxy-config\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321523 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.321619 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7e627b1-5b87-424a-8640-f721cae406ec-config-volume\") pod \"dns-default-nb7qq\" (UID: \"d7e627b1-5b87-424a-8640-f721cae406ec\") " pod="openshift-dns/dns-default-nb7qq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.322431 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.323366 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb-certs\") pod \"machine-config-server-h2595\" (UID: \"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb\") " pod="openshift-machine-config-operator/machine-config-server-h2595" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.324993 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec3fb998-85ac-4c16-be5e-3dd48da929df-cert\") pod \"ingress-canary-9pb5n\" (UID: \"ec3fb998-85ac-4c16-be5e-3dd48da929df\") " pod="openshift-ingress-canary/ingress-canary-9pb5n" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.326189 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb-node-bootstrap-token\") pod \"machine-config-server-h2595\" (UID: 
\"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb\") " pod="openshift-machine-config-operator/machine-config-server-h2595" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.327029 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7e627b1-5b87-424a-8640-f721cae406ec-metrics-tls\") pod \"dns-default-nb7qq\" (UID: \"d7e627b1-5b87-424a-8640-f721cae406ec\") " pod="openshift-dns/dns-default-nb7qq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.339195 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.364065 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.378994 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.399544 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.418524 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.422871 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.422977 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:15.922959844 +0000 UTC m=+122.225753332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.423501 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.423788 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:15.923779699 +0000 UTC m=+122.226573197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.440090 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.458357 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.478704 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.518016 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.524493 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.524638 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.024620748 +0000 UTC m=+122.327414236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.525364 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.526075 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.026040483 +0000 UTC m=+122.328834011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.530438 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh4n4\" (UniqueName: \"kubernetes.io/projected/ac92efeb-93b0-4044-9b79-fbfc19fc629e-kube-api-access-rh4n4\") pod \"route-controller-manager-6576b87f9c-p7z6k\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.541540 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.552511 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-tls\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.581016 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-bound-sa-token\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.598512 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3630c88c-69cc-44c2-8a80-90c02ace87f5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.618430 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.622704 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phft\" (UniqueName: \"kubernetes.io/projected/3630c88c-69cc-44c2-8a80-90c02ace87f5-kube-api-access-9phft\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.625509 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b8111aeb-2c95-4953-a2d0-586c5fcd4940-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.625859 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.626006 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.125981253 +0000 UTC m=+122.428774751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.626710 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.627037 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.127026816 +0000 UTC m=+122.429820304 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.645547 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.654470 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3630c88c-69cc-44c2-8a80-90c02ace87f5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.655639 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-trusted-ca\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.692108 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8729\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-kube-api-access-f8729\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.699001 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.707323 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3630c88c-69cc-44c2-8a80-90c02ace87f5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c4rfs\" (UID: \"3630c88c-69cc-44c2-8a80-90c02ace87f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.718410 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.727633 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.727865 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.227837354 +0000 UTC m=+122.530630862 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.728139 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z96mp\" (UniqueName: \"kubernetes.io/projected/eb7f0aaf-703f-4d9c-89c8-701f0707ab18-kube-api-access-z96mp\") pod \"machine-api-operator-5694c8668f-f2z26\" (UID: \"eb7f0aaf-703f-4d9c-89c8-701f0707ab18\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.728340 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.728676 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.228668909 +0000 UTC m=+122.531462387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.739052 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.753527 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9wzv\" (UniqueName: \"kubernetes.io/projected/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-kube-api-access-r9wzv\") pod \"controller-manager-879f6c89f-w8jb4\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.758630 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.770373 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzlnp\" (UniqueName: \"kubernetes.io/projected/612fb5e2-ec40-4a52-b6fb-463e64e0e872-kube-api-access-rzlnp\") pod \"oauth-openshift-558db77b4-rckw5\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.779346 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.789206 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pw74\" (UniqueName: \"kubernetes.io/projected/0046cf8f-c67b-4936-b3d6-1f7ac02eb919-kube-api-access-4pw74\") pod \"openshift-config-operator-7777fb866f-pn2q5\" (UID: \"0046cf8f-c67b-4936-b3d6-1f7ac02eb919\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.799623 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.806589 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7pvp\" (UniqueName: \"kubernetes.io/projected/cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7-kube-api-access-s7pvp\") pod \"apiserver-76f77b778f-xgjgw\" (UID: \"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7\") " pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.818965 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.829633 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.830010 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.329967923 +0000 UTC m=+122.632761471 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.830649 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6048e5b0-2eb7-41b9-a0e1-53651ff008e2-serving-cert\") pod \"service-ca-operator-777779d784-275fm\" (UID: \"6048e5b0-2eb7-41b9-a0e1-53651ff008e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.838360 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.850777 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-default-certificate\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.859605 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.872278 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.899084 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.908764 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3b7ce8-d257-4466-8385-0e506ba4cb38-config-volume\") pod \"collect-profiles-29492130-nl8kr\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.932327 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:15 crc kubenswrapper[4966]: E0127 15:44:15.933118 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.433094343 +0000 UTC m=+122.735887871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.938970 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.949598 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/767a1177-008b-4103-b582-5a679d5d6384-config\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.958831 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 15:44:15 crc kubenswrapper[4966]: I0127 15:44:15.969113 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4f1e013-7175-4268-908c-10658a7a8f1f-config\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.026818 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.030189 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dab55df7-5589-4454-9f76-2318e87b02bb-trusted-ca\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.034526 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.034690 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.534660614 +0000 UTC m=+122.837454102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.035305 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.035845 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.535817331 +0000 UTC m=+122.838610859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.039721 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.052782 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-webhook-cert\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.055821 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-apiservice-cert\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.058816 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.074081 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cfed2cb5-8390-4b53-998a-195d7cc17c90-signing-key\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.096628 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.100080 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.101242 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.110965 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.119615 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.133786 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44a925db-1525-422d-ac47-5e1ded16d64f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qrsjk\" (UID: \"44a925db-1525-422d-ac47-5e1ded16d64f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.137180 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.138199 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.638179747 +0000 UTC m=+122.940973245 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.139000 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.150946 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cfed2cb5-8390-4b53-998a-195d7cc17c90-signing-cabundle\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.178658 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.182857 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/767a1177-008b-4103-b582-5a679d5d6384-machine-approver-tls\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.198610 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.199315 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6048e5b0-2eb7-41b9-a0e1-53651ff008e2-config\") pod \"service-ca-operator-777779d784-275fm\" (UID: \"6048e5b0-2eb7-41b9-a0e1-53651ff008e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.218391 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.220571 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.239405 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.239564 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.240604 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.740583765 +0000 UTC m=+123.043377293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.250715 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/330c84f4-5179-4ec9-92d5-4a5dd18c799b-images\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.258394 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.264494 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f02135d5-ce67-4a94-9f94-60c29b672231-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.264786 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1317de86-7041-4b5a-8403-98489b8dc338-profile-collector-cert\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.266111 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3b7ce8-d257-4466-8385-0e506ba4cb38-secret-volume\") pod \"collect-profiles-29492130-nl8kr\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.299727 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.308200 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dab55df7-5589-4454-9f76-2318e87b02bb-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.310963 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08398814-3579-49c5-bf30-b8e700fabdab-service-ca-bundle\") pod \"router-default-5444994796-wzrqq\" (UID: 
\"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.318259 4966 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.318402 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-config podName:ecccc492-0c85-414f-9ae9-2f5aa8df4e0d nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.818361956 +0000 UTC m=+123.121155474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" (UID: "ecccc492-0c85-414f-9ae9-2f5aa8df4e0d") : failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.320720 4966 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.320818 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-stats-auth podName:08398814-3579-49c5-bf30-b8e700fabdab nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.820788871 +0000 UTC m=+123.123582399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-stats-auth") pod "router-default-5444994796-wzrqq" (UID: "08398814-3579-49c5-bf30-b8e700fabdab") : failed to sync secret cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.320824 4966 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.320987 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-operator-metrics podName:1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.820951226 +0000 UTC m=+123.123744784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-operator-metrics") pod "marketplace-operator-79b997595-tjhrf" (UID: "1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6") : failed to sync secret cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323074 4966 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323160 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/767a1177-008b-4103-b582-5a679d5d6384-auth-proxy-config podName:767a1177-008b-4103-b582-5a679d5d6384 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.823140454 +0000 UTC m=+123.125933972 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/767a1177-008b-4103-b582-5a679d5d6384-auth-proxy-config") pod "machine-approver-56656f9798-clgw2" (UID: "767a1177-008b-4103-b582-5a679d5d6384") : failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323194 4966 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323243 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1317de86-7041-4b5a-8403-98489b8dc338-srv-cert podName:1317de86-7041-4b5a-8403-98489b8dc338 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.823226027 +0000 UTC m=+123.126019555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/1317de86-7041-4b5a-8403-98489b8dc338-srv-cert") pod "catalog-operator-68c6474976-zk5gr" (UID: "1317de86-7041-4b5a-8403-98489b8dc338") : failed to sync secret cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323267 4966 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323311 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d00c259a-2ad3-44dd-97d2-53e763da5ab1-control-plane-machine-set-operator-tls podName:d00c259a-2ad3-44dd-97d2-53e763da5ab1 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.823296579 +0000 UTC m=+123.126090097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/d00c259a-2ad3-44dd-97d2-53e763da5ab1-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-l4jwq" (UID: "d00c259a-2ad3-44dd-97d2-53e763da5ab1") : failed to sync secret cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323503 4966 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323570 4966 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323584 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/805c6450-f56a-415f-85de-4a5b1df954b3-config podName:805c6450-f56a-415f-85de-4a5b1df954b3 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.823560627 +0000 UTC m=+123.126354235 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/805c6450-f56a-415f-85de-4a5b1df954b3-config") pod "kube-controller-manager-operator-78b949d7b-sdhr5" (UID: "805c6450-f56a-415f-85de-4a5b1df954b3") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323710 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f02135d5-ce67-4a94-9f94-60c29b672231-srv-cert podName:f02135d5-ce67-4a94-9f94-60c29b672231 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.823679391 +0000 UTC m=+123.126472879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f02135d5-ce67-4a94-9f94-60c29b672231-srv-cert") pod "olm-operator-6b444d44fb-m7rc2" (UID: "f02135d5-ce67-4a94-9f94-60c29b672231") : failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323779 4966 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323834 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-metrics-certs podName:08398814-3579-49c5-bf30-b8e700fabdab nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.823820395 +0000 UTC m=+123.126613913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-metrics-certs") pod "router-default-5444994796-wzrqq" (UID: "08398814-3579-49c5-bf30-b8e700fabdab") : failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323867 4966 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323939 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/805c6450-f56a-415f-85de-4a5b1df954b3-serving-cert podName:805c6450-f56a-415f-85de-4a5b1df954b3 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.823926999 +0000 UTC m=+123.126720527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/805c6450-f56a-415f-85de-4a5b1df954b3-serving-cert") pod "kube-controller-manager-operator-78b949d7b-sdhr5" (UID: "805c6450-f56a-415f-85de-4a5b1df954b3") : failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.323970 4966 secret.go:188] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.324008 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dab55df7-5589-4454-9f76-2318e87b02bb-metrics-tls podName:dab55df7-5589-4454-9f76-2318e87b02bb nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.823997481 +0000 UTC m=+123.126790999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dab55df7-5589-4454-9f76-2318e87b02bb-metrics-tls") pod "ingress-operator-5b745b69d9-nhgff" (UID: "dab55df7-5589-4454-9f76-2318e87b02bb") : failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.324035 4966 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.324069 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3438edc5-62d1-4e68-b4ac-aa41a4240e78-proxy-tls podName:3438edc5-62d1-4e68-b4ac-aa41a4240e78 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.824059103 +0000 UTC m=+123.126852621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/3438edc5-62d1-4e68-b4ac-aa41a4240e78-proxy-tls") pod "machine-config-controller-84d6567774-4lwnj" (UID: "3438edc5-62d1-4e68-b4ac-aa41a4240e78") : failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.324097 4966 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.324133 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330c84f4-5179-4ec9-92d5-4a5dd18c799b-proxy-tls podName:330c84f4-5179-4ec9-92d5-4a5dd18c799b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.824123465 +0000 UTC m=+123.126916983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/330c84f4-5179-4ec9-92d5-4a5dd18c799b-proxy-tls") pod "machine-config-operator-74547568cd-v4bvw" (UID: "330c84f4-5179-4ec9-92d5-4a5dd18c799b") : failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.324162 4966 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.324202 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-package-server-manager-serving-cert podName:a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.824191777 +0000 UTC m=+123.126985305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-cr8s9" (UID: "a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7") : failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.324227 4966 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.324261 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4f1e013-7175-4268-908c-10658a7a8f1f-serving-cert podName:b4f1e013-7175-4268-908c-10658a7a8f1f nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.824251429 +0000 UTC m=+123.127044947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b4f1e013-7175-4268-908c-10658a7a8f1f-serving-cert") pod "kube-apiserver-operator-766d6c64bb-ddksw" (UID: "b4f1e013-7175-4268-908c-10658a7a8f1f") : failed to sync secret cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.333320 4966 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.333644 4966 projected.go:194] Error preparing data for projected volume kube-api-access-848k7 for pod openshift-console/console-f9d7485db-sgfdr: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.333966 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3c5438a-013d-48da-8a1b-8dd23e17bce6-kube-api-access-848k7 podName:a3c5438a-013d-48da-8a1b-8dd23e17bce6 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.83392454 +0000 UTC m=+123.136718158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-848k7" (UniqueName: "kubernetes.io/projected/a3c5438a-013d-48da-8a1b-8dd23e17bce6-kube-api-access-848k7") pod "console-f9d7485db-sgfdr" (UID: "a3c5438a-013d-48da-8a1b-8dd23e17bce6") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.337750 4966 request.go:700] Waited for 1.018432511s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=27148
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.339846 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.340613 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.340880 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.840847415 +0000 UTC m=+123.143640933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.341682 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.342256 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.842233358 +0000 UTC m=+123.145026886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.344174 4966 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.344222 4966 projected.go:194] Error preparing data for projected volume kube-api-access-z56mh for pod openshift-etcd-operator/etcd-operator-b45778765-ztxq2: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.344302 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-kube-api-access-z56mh podName:1a48af85-5ce3-4cd9-85bd-d0f88d38103a nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.844276462 +0000 UTC m=+123.147070141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z56mh" (UniqueName: "kubernetes.io/projected/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-kube-api-access-z56mh") pod "etcd-operator-b45778765-ztxq2" (UID: "1a48af85-5ce3-4cd9-85bd-d0f88d38103a") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.375333 4966 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.375370 4966 projected.go:194] Error preparing data for projected volume kube-api-access-l25fx for pod openshift-dns-operator/dns-operator-744455d44c-52vpt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.375447 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1349c57b-da7f-4882-bb6f-73a883b23cea-kube-api-access-l25fx podName:1349c57b-da7f-4882-bb6f-73a883b23cea nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.875420411 +0000 UTC m=+123.178213929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l25fx" (UniqueName: "kubernetes.io/projected/1349c57b-da7f-4882-bb6f-73a883b23cea-kube-api-access-l25fx") pod "dns-operator-744455d44c-52vpt" (UID: "1349c57b-da7f-4882-bb6f-73a883b23cea") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.379778 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bm5r\" (UniqueName: \"kubernetes.io/projected/d7e627b1-5b87-424a-8640-f721cae406ec-kube-api-access-7bm5r\") pod \"dns-default-nb7qq\" (UID: \"d7e627b1-5b87-424a-8640-f721cae406ec\") " pod="openshift-dns/dns-default-nb7qq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.380118 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.388714 4966 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.388792 4966 projected.go:194] Error preparing data for projected volume kube-api-access-z7vn7 for pod openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.388930 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64e4f5ae-5aab-40a4-855b-8d7904027e63-kube-api-access-z7vn7 podName:64e4f5ae-5aab-40a4-855b-8d7904027e63 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.888872161 +0000 UTC m=+123.191665669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z7vn7" (UniqueName: "kubernetes.io/projected/64e4f5ae-5aab-40a4-855b-8d7904027e63-kube-api-access-z7vn7") pod "openshift-controller-manager-operator-756b6f6bc6-xdlgl" (UID: "64e4f5ae-5aab-40a4-855b-8d7904027e63") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.401790 4966 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.401820 4966 projected.go:194] Error preparing data for projected volume kube-api-access-bmchm for pod openshift-console-operator/console-operator-58897d9998-wpbsk: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.401873 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12b9edc6-687c-47b9-b8c6-8fa656fc40de-kube-api-access-bmchm podName:12b9edc6-687c-47b9-b8c6-8fa656fc40de nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.901857904 +0000 UTC m=+123.204651402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bmchm" (UniqueName: "kubernetes.io/projected/12b9edc6-687c-47b9-b8c6-8fa656fc40de-kube-api-access-bmchm") pod "console-operator-58897d9998-wpbsk" (UID: "12b9edc6-687c-47b9-b8c6-8fa656fc40de") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.420561 4966 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.420626 4966 projected.go:194] Error preparing data for projected volume kube-api-access-k7g7t for pod openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.420740 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a0612699-805c-409c-a48a-d9852f1c7f4f-kube-api-access-k7g7t podName:a0612699-805c-409c-a48a-d9852f1c7f4f nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.920709901 +0000 UTC m=+123.223503379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k7g7t" (UniqueName: "kubernetes.io/projected/a0612699-805c-409c-a48a-d9852f1c7f4f-kube-api-access-k7g7t") pod "cluster-samples-operator-665b6dd947-c28bj" (UID: "a0612699-805c-409c-a48a-d9852f1c7f4f") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.442768 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6c2j\" (UniqueName: \"kubernetes.io/projected/3438edc5-62d1-4e68-b4ac-aa41a4240e78-kube-api-access-c6c2j\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.443060 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.443393 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.943352736 +0000 UTC m=+123.246146264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.444321 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.444374 4966 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.444411 4966 projected.go:194] Error preparing data for projected volume kube-api-access-p8f6t for pod openshift-authentication-operator/authentication-operator-69f744f599-fws6n: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.444503 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4ecd65d-fa3e-456b-8db6-314cc20216ed-kube-api-access-p8f6t podName:c4ecd65d-fa3e-456b-8db6-314cc20216ed nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.944474821 +0000 UTC m=+123.247268359 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p8f6t" (UniqueName: "kubernetes.io/projected/c4ecd65d-fa3e-456b-8db6-314cc20216ed-kube-api-access-p8f6t") pod "authentication-operator-69f744f599-fws6n" (UID: "c4ecd65d-fa3e-456b-8db6-314cc20216ed") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.444784 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.94476865 +0000 UTC m=+123.247562148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.464255 4966 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.464300 4966 projected.go:194] Error preparing data for projected volume kube-api-access-4fjf2 for pod openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.464370 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3fe20580-4a7b-4b46-9cc2-07c852e9c866-kube-api-access-4fjf2 podName:3fe20580-4a7b-4b46-9cc2-07c852e9c866 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.96434814 +0000 UTC m=+123.267141638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4fjf2" (UniqueName: "kubernetes.io/projected/3fe20580-4a7b-4b46-9cc2-07c852e9c866-kube-api-access-4fjf2") pod "apiserver-7bbb656c7d-gl8pc" (UID: "3fe20580-4a7b-4b46-9cc2-07c852e9c866") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.473853 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nb7qq"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.486692 4966 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.486729 4966 projected.go:194] Error preparing data for projected volume kube-api-access-78vmx for pod openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.486786 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6902c6a5-688c-4d89-9f2d-126e6cdd5879-kube-api-access-78vmx podName:6902c6a5-688c-4d89-9f2d-126e6cdd5879 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:16.986766748 +0000 UTC m=+123.289560246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-78vmx" (UniqueName: "kubernetes.io/projected/6902c6a5-688c-4d89-9f2d-126e6cdd5879-kube-api-access-78vmx") pod "openshift-apiserver-operator-796bbdcf4f-hq4gf" (UID: "6902c6a5-688c-4d89-9f2d-126e6cdd5879") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.514961 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n4jx\" (UniqueName: \"kubernetes.io/projected/330c84f4-5179-4ec9-92d5-4a5dd18c799b-kube-api-access-6n4jx\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.535243 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dzk6\" (UniqueName: \"kubernetes.io/projected/44a925db-1525-422d-ac47-5e1ded16d64f-kube-api-access-5dzk6\") pod \"multus-admission-controller-857f4d67dd-qrsjk\" (UID: \"44a925db-1525-422d-ac47-5e1ded16d64f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.544809 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.544880 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.044866026 +0000 UTC m=+123.347659514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.545213 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.545511 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.045503466 +0000 UTC m=+123.348296954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.558487 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zjzc\" (UniqueName: \"kubernetes.io/projected/3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb-kube-api-access-8zjzc\") pod \"machine-config-server-h2595\" (UID: \"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb\") " pod="openshift-machine-config-operator/machine-config-server-h2595"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.558804 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.604555 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv77k\" (UniqueName: \"kubernetes.io/projected/ec3fb998-85ac-4c16-be5e-3dd48da929df-kube-api-access-pv77k\") pod \"ingress-canary-9pb5n\" (UID: \"ec3fb998-85ac-4c16-be5e-3dd48da929df\") " pod="openshift-ingress-canary/ingress-canary-9pb5n"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.647205 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.647407 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.147377996 +0000 UTC m=+123.450171484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.648343 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.649029 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.149012598 +0000 UTC m=+123.451806096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.673507 4966 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.673539 4966 projected.go:194] Error preparing data for projected volume kube-api-access-hnfl5 for pod openshift-console/downloads-7954f5f757-kmmq4: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.673601 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36c5078e-fb86-4817-a08e-6d4b4e2bee7f-kube-api-access-hnfl5 podName:36c5078e-fb86-4817-a08e-6d4b4e2bee7f nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.173582302 +0000 UTC m=+123.476375790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hnfl5" (UniqueName: "kubernetes.io/projected/36c5078e-fb86-4817-a08e-6d4b4e2bee7f-kube-api-access-hnfl5") pod "downloads-7954f5f757-kmmq4" (UID: "36c5078e-fb86-4817-a08e-6d4b4e2bee7f") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.678425 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.689694 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nb7qq"]
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.697974 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.719367 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.750471 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.750629 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.25060986 +0000 UTC m=+123.553403348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.750842 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.751149 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.251141346 +0000 UTC m=+123.553934834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.753570 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r777x\" (UniqueName: \"kubernetes.io/projected/d00c259a-2ad3-44dd-97d2-53e763da5ab1-kube-api-access-r777x\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4jwq\" (UID: \"d00c259a-2ad3-44dd-97d2-53e763da5ab1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.758194 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.764716 4966 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.764756 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.778907 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.779810 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9pb5n"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.784849 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h2595"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.816327 4966 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.816409 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.827871 4966 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.827974 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.841089 4966 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.841176 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.842887 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.849627 4966 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.849701 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.852933 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853525 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853596 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-848k7\" (UniqueName: \"kubernetes.io/projected/a3c5438a-013d-48da-8a1b-8dd23e17bce6-kube-api-access-848k7\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853655 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853738 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-stats-auth\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853764 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1317de86-7041-4b5a-8403-98489b8dc338-srv-cert\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853794 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/330c84f4-5179-4ec9-92d5-4a5dd18c799b-proxy-tls\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853813 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-metrics-certs\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853849 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/805c6450-f56a-415f-85de-4a5b1df954b3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853874 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dab55df7-5589-4454-9f76-2318e87b02bb-metrics-tls\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853949 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cr8s9\" (UID: \"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.853987 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4f1e013-7175-4268-908c-10658a7a8f1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.854011 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3438edc5-62d1-4e68-b4ac-aa41a4240e78-proxy-tls\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.854036 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d00c259a-2ad3-44dd-97d2-53e763da5ab1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4jwq\" (UID: \"d00c259a-2ad3-44dd-97d2-53e763da5ab1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.854074 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805c6450-f56a-415f-85de-4a5b1df954b3-config\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.854098 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f02135d5-ce67-4a94-9f94-60c29b672231-srv-cert\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.854119 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z56mh\" (UniqueName: \"kubernetes.io/projected/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-kube-api-access-z56mh\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.854150 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/767a1177-008b-4103-b582-5a679d5d6384-auth-proxy-config\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.856890 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.857022 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.357001581 +0000 UTC m=+123.659795079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.863121 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-stats-auth\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.863214 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.863528 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dab55df7-5589-4454-9f76-2318e87b02bb-metrics-tls\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.865517 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/330c84f4-5179-4ec9-92d5-4a5dd18c799b-proxy-tls\") pod \"machine-config-operator-74547568cd-v4bvw\" (UID: \"330c84f4-5179-4ec9-92d5-4a5dd18c799b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.865629 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3438edc5-62d1-4e68-b4ac-aa41a4240e78-proxy-tls\") pod \"machine-config-controller-84d6567774-4lwnj\" (UID: \"3438edc5-62d1-4e68-b4ac-aa41a4240e78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.866116 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08398814-3579-49c5-bf30-b8e700fabdab-metrics-certs\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.870613 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f02135d5-ce67-4a94-9f94-60c29b672231-srv-cert\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.875757 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/805c6450-f56a-415f-85de-4a5b1df954b3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.882939 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhbg4\" (UniqueName: \"kubernetes.io/projected/fccdaa28-9674-4bb6-9c58-3f3905df1e56-kube-api-access-lhbg4\") pod \"csi-hostpathplugin-96rmm\" (UID: \"fccdaa28-9674-4bb6-9c58-3f3905df1e56\") " pod="hostpath-provisioner/csi-hostpathplugin-96rmm"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.883275 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.890183 4966 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.890891 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805c6450-f56a-415f-85de-4a5b1df954b3-config\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.900404 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.913357 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4f1e013-7175-4268-908c-10658a7a8f1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.924396 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.930271 4966 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.931539 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d00c259a-2ad3-44dd-97d2-53e763da5ab1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4jwq\" (UID: \"d00c259a-2ad3-44dd-97d2-53e763da5ab1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.939260 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.943722 4966 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.943774 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.947169 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/767a1177-008b-4103-b582-5a679d5d6384-auth-proxy-config\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.954987 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.955022 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmchm\" (UniqueName: \"kubernetes.io/projected/12b9edc6-687c-47b9-b8c6-8fa656fc40de-kube-api-access-bmchm\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.955041 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vn7\" (UniqueName: \"kubernetes.io/projected/64e4f5ae-5aab-40a4-855b-8d7904027e63-kube-api-access-z7vn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.955091 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7g7t\" (UniqueName: \"kubernetes.io/projected/a0612699-805c-409c-a48a-d9852f1c7f4f-kube-api-access-k7g7t\") pod \"cluster-samples-operator-665b6dd947-c28bj\" (UID: \"a0612699-805c-409c-a48a-d9852f1c7f4f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.955147 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8f6t\" (UniqueName: \"kubernetes.io/projected/c4ecd65d-fa3e-456b-8db6-314cc20216ed-kube-api-access-p8f6t\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.955175 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l25fx\" (UniqueName: \"kubernetes.io/projected/1349c57b-da7f-4882-bb6f-73a883b23cea-kube-api-access-l25fx\") pod \"dns-operator-744455d44c-52vpt\" (UID: \"1349c57b-da7f-4882-bb6f-73a883b23cea\") " pod="openshift-dns-operator/dns-operator-744455d44c-52vpt"
Jan 27 15:44:16 crc kubenswrapper[4966]: E0127 15:44:16.955560 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.455550069 +0000 UTC m=+123.758343557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.958484 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.967355 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-f2z26"]
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.970970 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1317de86-7041-4b5a-8403-98489b8dc338-srv-cert\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.978696 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.981941 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cr8s9\" (UID: \"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9"
Jan 27 15:44:16 crc kubenswrapper[4966]: W0127 15:44:16.985233 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb7f0aaf_703f_4d9c_89c8_701f0707ab18.slice/crio-ecf674a26e95c84deebaed1690b8e9e4a9c1c551d45142bf5126e3ac407b99a0 WatchSource:0}: Error finding container ecf674a26e95c84deebaed1690b8e9e4a9c1c551d45142bf5126e3ac407b99a0: Status 404 returned error can't find the container with id ecf674a26e95c84deebaed1690b8e9e4a9c1c551d45142bf5126e3ac407b99a0
Jan 27 15:44:16 crc kubenswrapper[4966]: I0127 15:44:16.998796 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.005416 4966 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.007038 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9pb5n"]
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.008714 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-848k7\" (UniqueName: \"kubernetes.io/projected/a3c5438a-013d-48da-8a1b-8dd23e17bce6-kube-api-access-848k7\") pod \"console-f9d7485db-sgfdr\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") " pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.019307 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.019777 4966 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.023805 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z56mh\" (UniqueName: \"kubernetes.io/projected/1a48af85-5ce3-4cd9-85bd-d0f88d38103a-kube-api-access-z56mh\") pod \"etcd-operator-b45778765-ztxq2\" (UID: \"1a48af85-5ce3-4cd9-85bd-d0f88d38103a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2"
Jan 27 15:44:17 crc kubenswrapper[4966]: W0127 15:44:17.026620 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec3fb998_85ac_4c16_be5e_3dd48da929df.slice/crio-031b9bdb8f8847dc3012ec1dfde28445b699b9fdc6c864f0dcb4a245d8e93230 WatchSource:0}: Error finding container 031b9bdb8f8847dc3012ec1dfde28445b699b9fdc6c864f0dcb4a245d8e93230: Status 404 returned error can't find the container with id 031b9bdb8f8847dc3012ec1dfde28445b699b9fdc6c864f0dcb4a245d8e93230
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.039934 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.050827 4966 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.050909 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.051837 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l25fx\" (UniqueName: \"kubernetes.io/projected/1349c57b-da7f-4882-bb6f-73a883b23cea-kube-api-access-l25fx\") pod \"dns-operator-744455d44c-52vpt\" (UID: \"1349c57b-da7f-4882-bb6f-73a883b23cea\") " pod="openshift-dns-operator/dns-operator-744455d44c-52vpt"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.056767 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.057003 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78vmx\" (UniqueName: \"kubernetes.io/projected/6902c6a5-688c-4d89-9f2d-126e6cdd5879-kube-api-access-78vmx\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.057109 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjf2\" (UniqueName: \"kubernetes.io/projected/3fe20580-4a7b-4b46-9cc2-07c852e9c866-kube-api-access-4fjf2\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"
Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.057306 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.557291286 +0000 UTC m=+123.860084774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.058650 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.065637 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-96rmm"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.069353 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w8jb4"]
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.072092 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vn7\" (UniqueName: \"kubernetes.io/projected/64e4f5ae-5aab-40a4-855b-8d7904027e63-kube-api-access-z7vn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xdlgl\" (UID: \"64e4f5ae-5aab-40a4-855b-8d7904027e63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.080785 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.083854 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"]
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.090478 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmchm\" (UniqueName: \"kubernetes.io/projected/12b9edc6-687c-47b9-b8c6-8fa656fc40de-kube-api-access-bmchm\") pod \"console-operator-58897d9998-wpbsk\" (UID: \"12b9edc6-687c-47b9-b8c6-8fa656fc40de\") " pod="openshift-console-operator/console-operator-58897d9998-wpbsk"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.099012 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 15:44:17 crc kubenswrapper[4966]: W0127 15:44:17.107959 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb36ea7a2_dd43_4f61_a99e_523cd4bea6b6.slice/crio-236f498289c18ca399709ac7813cce33cd3e5784a44e4f3c9510bf9eb9cc4719 WatchSource:0}: Error finding container 236f498289c18ca399709ac7813cce33cd3e5784a44e4f3c9510bf9eb9cc4719: Status 404 returned error can't find the container with id 236f498289c18ca399709ac7813cce33cd3e5784a44e4f3c9510bf9eb9cc4719
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.109573 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7g7t\" (UniqueName: \"kubernetes.io/projected/a0612699-805c-409c-a48a-d9852f1c7f4f-kube-api-access-k7g7t\") pod \"cluster-samples-operator-665b6dd947-c28bj\" (UID: \"a0612699-805c-409c-a48a-d9852f1c7f4f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.118768 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.133166 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8f6t\" (UniqueName: \"kubernetes.io/projected/c4ecd65d-fa3e-456b-8db6-314cc20216ed-kube-api-access-p8f6t\") pod \"authentication-operator-69f744f599-fws6n\" (UID: \"c4ecd65d-fa3e-456b-8db6-314cc20216ed\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.141686 4966 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.147948 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"] Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.151963 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjf2\" (UniqueName: \"kubernetes.io/projected/3fe20580-4a7b-4b46-9cc2-07c852e9c866-kube-api-access-4fjf2\") pod \"apiserver-7bbb656c7d-gl8pc\" (UID: \"3fe20580-4a7b-4b46-9cc2-07c852e9c866\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.159178 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.159539 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.659520778 +0000 UTC m=+123.962314266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.163255 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.170706 4966 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: W0127 15:44:17.173472 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0046cf8f_c67b_4936_b3d6_1f7ac02eb919.slice/crio-43b555d25b48b38d5ffecd199a99af160d64d58ff43049527a83202117d48063 WatchSource:0}: Error finding container 43b555d25b48b38d5ffecd199a99af160d64d58ff43049527a83202117d48063: Status 404 returned error can't find the container with id 43b555d25b48b38d5ffecd199a99af160d64d58ff43049527a83202117d48063 Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.174413 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78vmx\" (UniqueName: \"kubernetes.io/projected/6902c6a5-688c-4d89-9f2d-126e6cdd5879-kube-api-access-78vmx\") pod \"openshift-apiserver-operator-796bbdcf4f-hq4gf\" (UID: \"6902c6a5-688c-4d89-9f2d-126e6cdd5879\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.198691 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 15:44:17 crc 
kubenswrapper[4966]: I0127 15:44:17.200486 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rckw5"] Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.215316 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" event={"ID":"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6","Type":"ContainerStarted","Data":"236f498289c18ca399709ac7813cce33cd3e5784a44e4f3c9510bf9eb9cc4719"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.218470 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.220640 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs"] Jan 27 15:44:17 crc kubenswrapper[4966]: W0127 15:44:17.221287 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod612fb5e2_ec40_4a52_b6fb_463e64e0e872.slice/crio-ae7d8525de78283e7edd86bcc0dd096d0053abda398b362335675ff7f7764e60 WatchSource:0}: Error finding container ae7d8525de78283e7edd86bcc0dd096d0053abda398b362335675ff7f7764e60: Status 404 returned error can't find the container with id ae7d8525de78283e7edd86bcc0dd096d0053abda398b362335675ff7f7764e60 Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.223081 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" event={"ID":"ac92efeb-93b0-4044-9b79-fbfc19fc629e","Type":"ContainerStarted","Data":"59e8f32ebe61814f54edf8716befc1a7e04791a19a7b5b62e2d8cbffa6ab4534"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.224612 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h2595" event={"ID":"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb","Type":"ContainerStarted","Data":"b62e1d21162b40f2cc3a8d5920163484cdada02b8398bf7a390c75e5f323dcc9"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.224640 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h2595" event={"ID":"3e3f05ac-eaa1-4dfb-9a2f-d2dfd9c695eb","Type":"ContainerStarted","Data":"a06060de656bd45a5c657c27e08be14bc9ef78e57f5b2ec4def4170e7caafc65"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.225646 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nb7qq" event={"ID":"d7e627b1-5b87-424a-8640-f721cae406ec","Type":"ContainerStarted","Data":"858ca778369f8272d863708431aa2175efb55e0bf66d42fafcec467fb36e583f"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.225670 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nb7qq" event={"ID":"d7e627b1-5b87-424a-8640-f721cae406ec","Type":"ContainerStarted","Data":"01304bf471fccc62df4bc5f30f50b60df9de2b8017aecd93e0df2ee77084d5fa"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.228940 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9pb5n" event={"ID":"ec3fb998-85ac-4c16-be5e-3dd48da929df","Type":"ContainerStarted","Data":"3976df5a4cb3b44e7a5789f44018ed841c4102690da7755fb767aa7023d7b84e"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.228976 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-9pb5n" event={"ID":"ec3fb998-85ac-4c16-be5e-3dd48da929df","Type":"ContainerStarted","Data":"031b9bdb8f8847dc3012ec1dfde28445b699b9fdc6c864f0dcb4a245d8e93230"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.230932 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" event={"ID":"0046cf8f-c67b-4936-b3d6-1f7ac02eb919","Type":"ContainerStarted","Data":"43b555d25b48b38d5ffecd199a99af160d64d58ff43049527a83202117d48063"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.232083 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" event={"ID":"eb7f0aaf-703f-4d9c-89c8-701f0707ab18","Type":"ContainerStarted","Data":"25cc82b3b3c73cf57446a6eb2f3392f21d282d6a74db5253ffeccd437422e803"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.232110 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" event={"ID":"eb7f0aaf-703f-4d9c-89c8-701f0707ab18","Type":"ContainerStarted","Data":"ecf674a26e95c84deebaed1690b8e9e4a9c1c551d45142bf5126e3ac407b99a0"} Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.239735 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.259977 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.260302 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnfl5\" (UniqueName: \"kubernetes.io/projected/36c5078e-fb86-4817-a08e-6d4b4e2bee7f-kube-api-access-hnfl5\") pod \"downloads-7954f5f757-kmmq4\" (UID: \"36c5078e-fb86-4817-a08e-6d4b4e2bee7f\") " pod="openshift-console/downloads-7954f5f757-kmmq4" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.260364 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.760347206 +0000 UTC m=+124.063140694 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.260458 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.264096 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.264745 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.764730663 +0000 UTC m=+124.067524151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.269589 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnfl5\" (UniqueName: \"kubernetes.io/projected/36c5078e-fb86-4817-a08e-6d4b4e2bee7f-kube-api-access-hnfl5\") pod \"downloads-7954f5f757-kmmq4\" (UID: \"36c5078e-fb86-4817-a08e-6d4b4e2bee7f\") " pod="openshift-console/downloads-7954f5f757-kmmq4" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.279864 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.298877 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.299154 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xgjgw"] Jan 27 15:44:17 crc kubenswrapper[4966]: W0127 15:44:17.308588 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc84a1d6_0aba_48d3_9fcf_d5bd5719e0f7.slice/crio-64ddd09e1ea8935b567c61dad3f59baea0eb1ef8567e4aa4e410a629988d12bd WatchSource:0}: Error finding container 64ddd09e1ea8935b567c61dad3f59baea0eb1ef8567e4aa4e410a629988d12bd: Status 404 returned error can't find the container with id 64ddd09e1ea8935b567c61dad3f59baea0eb1ef8567e4aa4e410a629988d12bd Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.319493 4966 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.340228 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.357330 4966 request.go:700] Waited for 1.353823991s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27148 Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.359524 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.364032 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.364184 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.864154097 +0000 UTC m=+124.166947585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.364284 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.364719 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.864709515 +0000 UTC m=+124.167503003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.380670 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.393684 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-96rmm"] Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.399777 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.419753 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.440940 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.459575 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.466247 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.466489 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:17.966467603 +0000 UTC m=+124.269261091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.466776 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.467086 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 15:44:17.967078791 +0000 UTC m=+124.269872279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.479934 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.500466 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.502443 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/805c6450-f56a-415f-85de-4a5b1df954b3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sdhr5\" (UID: \"805c6450-f56a-415f-85de-4a5b1df954b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.506106 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbk28\" (UniqueName: \"kubernetes.io/projected/6048e5b0-2eb7-41b9-a0e1-53651ff008e2-kube-api-access-qbk28\") pod \"service-ca-operator-777779d784-275fm\" (UID: \"6048e5b0-2eb7-41b9-a0e1-53651ff008e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.518653 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.525334 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.539720 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.553778 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecccc492-0c85-414f-9ae9-2f5aa8df4e0d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hhnc\" (UID: \"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.558279 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.567222 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.567337 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.067319902 +0000 UTC m=+124.370113390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.567486 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.567753 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.067745645 +0000 UTC m=+124.370539133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.570521 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4f1e013-7175-4268-908c-10658a7a8f1f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ddksw\" (UID: \"b4f1e013-7175-4268-908c-10658a7a8f1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.578953 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.599354 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.600979 4966 projected.go:194] Error preparing data for projected volume kube-api-access-bj7pt for pod openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9: failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.601049 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-kube-api-access-bj7pt podName:a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.101031031 +0000 UTC m=+124.403824519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bj7pt" (UniqueName: "kubernetes.io/projected/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-kube-api-access-bj7pt") pod "package-server-manager-789f6589d5-cr8s9" (UID: "a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7") : failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.606861 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56ckh\" (UniqueName: \"kubernetes.io/projected/4d3b7ce8-d257-4466-8385-0e506ba4cb38-kube-api-access-56ckh\") pod \"collect-profiles-29492130-nl8kr\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.607951 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpg5v\" (UniqueName: \"kubernetes.io/projected/70f9fda4-72f3-4f6a-8b6a-38dffbb6c958-kube-api-access-fpg5v\") pod \"packageserver-d55dfcdfc-fgklq\" (UID: \"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.609751 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p45j\" (UniqueName: \"kubernetes.io/projected/1317de86-7041-4b5a-8403-98489b8dc338-kube-api-access-7p45j\") pod \"catalog-operator-68c6474976-zk5gr\" (UID: \"1317de86-7041-4b5a-8403-98489b8dc338\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.614236 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92cg5\" (UniqueName: \"kubernetes.io/projected/f02135d5-ce67-4a94-9f94-60c29b672231-kube-api-access-92cg5\") pod \"olm-operator-6b444d44fb-m7rc2\" (UID: \"f02135d5-ce67-4a94-9f94-60c29b672231\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.618487 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.620621 4966 projected.go:194] Error preparing data for projected volume kube-api-access-rblmp for pod openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2: failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.620683 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/767a1177-008b-4103-b582-5a679d5d6384-kube-api-access-rblmp podName:767a1177-008b-4103-b582-5a679d5d6384 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.120665092 +0000 UTC m=+124.423458580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rblmp" (UniqueName: "kubernetes.io/projected/767a1177-008b-4103-b582-5a679d5d6384-kube-api-access-rblmp") pod "machine-approver-56656f9798-clgw2" (UID: "767a1177-008b-4103-b582-5a679d5d6384") : failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.641204 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.646753 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.659197 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.670132 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.670701 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.670846 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.170824364 +0000 UTC m=+124.473617852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.671122 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.671375 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.17136757 +0000 UTC m=+124.474161058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.678980 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.686104 4966 projected.go:194] Error preparing data for projected volume kube-api-access-x287c for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8: failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.686203 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-kube-api-access-x287c podName:f9e5d666-9ed7-45b5-9e80-b1a8cab36cda nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.186178921 +0000 UTC m=+124.488972409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x287c" (UniqueName: "kubernetes.io/projected/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-kube-api-access-x287c") pod "kube-storage-version-migrator-operator-b67b599dd-g8mr8" (UID: "f9e5d666-9ed7-45b5-9e80-b1a8cab36cda") : failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.698921 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.700825 4966 projected.go:194] Error preparing data for projected volume kube-api-access-9mwc8 for pod openshift-service-ca/service-ca-9c57cc56f-hl7j2: failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.700912 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cfed2cb5-8390-4b53-998a-195d7cc17c90-kube-api-access-9mwc8 podName:cfed2cb5-8390-4b53-998a-195d7cc17c90 nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.200880219 +0000 UTC m=+124.503673707 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9mwc8" (UniqueName: "kubernetes.io/projected/cfed2cb5-8390-4b53-998a-195d7cc17c90-kube-api-access-9mwc8") pod "service-ca-9c57cc56f-hl7j2" (UID: "cfed2cb5-8390-4b53-998a-195d7cc17c90") : failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.720822 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.723956 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qrsjk"] Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.729390 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.741258 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.747737 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.758859 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.762325 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.771769 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.772194 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.272179248 +0000 UTC m=+124.574972736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.779996 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.787732 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.801791 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.810721 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.819309 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.822226 4966 projected.go:194] Error preparing data for projected volume kube-api-access-fj858 for pod openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff: failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.822303 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dab55df7-5589-4454-9f76-2318e87b02bb-kube-api-access-fj858 podName:dab55df7-5589-4454-9f76-2318e87b02bb nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.322284268 +0000 UTC m=+124.625077756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fj858" (UniqueName: "kubernetes.io/projected/dab55df7-5589-4454-9f76-2318e87b02bb-kube-api-access-fj858") pod "ingress-operator-5b745b69d9-nhgff" (UID: "dab55df7-5589-4454-9f76-2318e87b02bb") : failed to sync configmap cache: timed out waiting for the condition Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.839954 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.840981 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-52vpt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.875377 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.876386 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.376364581 +0000 UTC m=+124.679158069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.880752 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.887945 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.899290 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.905012 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.916228 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw"] Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.920118 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.925978 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.939984 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.958260 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vshm4\" (UniqueName: \"kubernetes.io/projected/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-kube-api-access-vshm4\") pod \"marketplace-operator-79b997595-tjhrf\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.958743 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.970374 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.976082 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.976217 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.476190459 +0000 UTC m=+124.778983947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.976373 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:17 crc kubenswrapper[4966]: E0127 15:44:17.976870 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.476855929 +0000 UTC m=+124.779649417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.978836 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.982508 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sgfdr"]
Jan 27 15:44:17 crc kubenswrapper[4966]: I0127 15:44:17.994621 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxs4\" (UniqueName: \"kubernetes.io/projected/1348f4a2-fbf7-4e01-82ad-3943f3bc628e-kube-api-access-rfxs4\") pod \"migrator-59844c95c7-ssfvh\" (UID: \"1348f4a2-fbf7-4e01-82ad-3943f3bc628e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.003620 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.018133 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kmmq4"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.020762 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.027150 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.041503 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.047485 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdcf7\" (UniqueName: \"kubernetes.io/projected/08398814-3579-49c5-bf30-b8e700fabdab-kube-api-access-cdcf7\") pod \"router-default-5444994796-wzrqq\" (UID: \"08398814-3579-49c5-bf30-b8e700fabdab\") " pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.059356 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.062272 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.071633 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj"]
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.077503 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.077799 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.577786 +0000 UTC m=+124.880579488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.078610 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.086403 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.086543 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.086980 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.099181 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.103126 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.118352 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 15:44:18 crc kubenswrapper[4966]: W0127 15:44:18.123875 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3438edc5_62d1_4e68_b4ac_aa41a4240e78.slice/crio-e2acb8e2d0e72a437456ee6a8b8e45f514eea328ba526f0f669fe8136da373ba WatchSource:0}: Error finding container e2acb8e2d0e72a437456ee6a8b8e45f514eea328ba526f0f669fe8136da373ba: Status 404 returned error can't find the container with id e2acb8e2d0e72a437456ee6a8b8e45f514eea328ba526f0f669fe8136da373ba
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.130263 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.140457 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.149163 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.181526 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.181605 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj7pt\" (UniqueName: \"kubernetes.io/projected/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-kube-api-access-bj7pt\") pod \"package-server-manager-789f6589d5-cr8s9\" (UID: \"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.181627 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rblmp\" (UniqueName: \"kubernetes.io/projected/767a1177-008b-4103-b582-5a679d5d6384-kube-api-access-rblmp\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2"
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.182010 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.681991474 +0000 UTC m=+124.984784962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.194100 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rblmp\" (UniqueName: \"kubernetes.io/projected/767a1177-008b-4103-b582-5a679d5d6384-kube-api-access-rblmp\") pod \"machine-approver-56656f9798-clgw2\" (UID: \"767a1177-008b-4103-b582-5a679d5d6384\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.198378 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj7pt\" (UniqueName: \"kubernetes.io/projected/a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7-kube-api-access-bj7pt\") pod \"package-server-manager-789f6589d5-cr8s9\" (UID: \"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.199194 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.204284 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.207548 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.220497 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.221742 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.240556 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" event={"ID":"330c84f4-5179-4ec9-92d5-4a5dd18c799b","Type":"ContainerStarted","Data":"26e2970aee871898fa647e19e0adb4d8953cd408bea4a9b36678714c5686db70"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.240596 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" event={"ID":"330c84f4-5179-4ec9-92d5-4a5dd18c799b","Type":"ContainerStarted","Data":"ab292a2428b209ba6a0d8fbfc0feec4f9b42d6f191a79d2c16d57719fa537184"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.245123 4966 generic.go:334] "Generic (PLEG): container finished" podID="cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7" containerID="810880afaea9593d829b3d7128e74d07a4f5886f0895b2b54c4dfd00cc95d77b" exitCode=0
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.245237 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" event={"ID":"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7","Type":"ContainerDied","Data":"810880afaea9593d829b3d7128e74d07a4f5886f0895b2b54c4dfd00cc95d77b"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.245285 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" event={"ID":"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7","Type":"ContainerStarted","Data":"64ddd09e1ea8935b567c61dad3f59baea0eb1ef8567e4aa4e410a629988d12bd"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.247211 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" event={"ID":"3630c88c-69cc-44c2-8a80-90c02ace87f5","Type":"ContainerStarted","Data":"7aa57fbe36fd929282c0cb547303ddf154af7b289a5c983e275036184ec029fd"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.247294 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" event={"ID":"3630c88c-69cc-44c2-8a80-90c02ace87f5","Type":"ContainerStarted","Data":"a4865cf312d93e29e838c32725f6a80b5c60d220b65b4a35f1d46773c3144f6c"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.248782 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sgfdr" event={"ID":"a3c5438a-013d-48da-8a1b-8dd23e17bce6","Type":"ContainerStarted","Data":"129ad2048ba4772209bd451a32cd7fe01d80d8b11b9d5159af33d2d313d2b7aa"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.259323 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.261020 4966 generic.go:334] "Generic (PLEG): container finished" podID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerID="bce8ecafb478a959e11113ef787c37b3a622725ca8b49b7a2857bdf6ad59dcba" exitCode=0
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.261095 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" event={"ID":"0046cf8f-c67b-4936-b3d6-1f7ac02eb919","Type":"ContainerDied","Data":"bce8ecafb478a959e11113ef787c37b3a622725ca8b49b7a2857bdf6ad59dcba"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.266142 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.270345 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" event={"ID":"eb7f0aaf-703f-4d9c-89c8-701f0707ab18","Type":"ContainerStarted","Data":"09ee0b1b05e0abea5bed99991131e120bddc46d8616ed6ad4e496859e12c9ba3"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.275195 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk" event={"ID":"44a925db-1525-422d-ac47-5e1ded16d64f","Type":"ContainerStarted","Data":"945119c9d4775c8678fb863fd9102d882a7359c7e9c96657533942749d7368c5"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.275264 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk" event={"ID":"44a925db-1525-422d-ac47-5e1ded16d64f","Type":"ContainerStarted","Data":"716608e44c5d1b35ecac046014f5266158ca1438c91140fcd12d4edaf8b951f0"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.283345 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" event={"ID":"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6","Type":"ContainerStarted","Data":"e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.283386 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.283445 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.283673 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x287c\" (UniqueName: \"kubernetes.io/projected/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-kube-api-access-x287c\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.283700 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mwc8\" (UniqueName: \"kubernetes.io/projected/cfed2cb5-8390-4b53-998a-195d7cc17c90-kube-api-access-9mwc8\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2"
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.283733 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.783712241 +0000 UTC m=+125.086505729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.287655 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" event={"ID":"3438edc5-62d1-4e68-b4ac-aa41a4240e78","Type":"ContainerStarted","Data":"e2acb8e2d0e72a437456ee6a8b8e45f514eea328ba526f0f669fe8136da373ba"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.288183 4966 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-w8jb4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.288253 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" podUID="b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.288607 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mwc8\" (UniqueName: \"kubernetes.io/projected/cfed2cb5-8390-4b53-998a-195d7cc17c90-kube-api-access-9mwc8\") pod \"service-ca-9c57cc56f-hl7j2\" (UID: \"cfed2cb5-8390-4b53-998a-195d7cc17c90\") " pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.299550 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x287c\" (UniqueName: \"kubernetes.io/projected/f9e5d666-9ed7-45b5-9e80-b1a8cab36cda-kube-api-access-x287c\") pod \"kube-storage-version-migrator-operator-b67b599dd-g8mr8\" (UID: \"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.310458 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wpbsk"]
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.310502 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl"]
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.327083 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" event={"ID":"612fb5e2-ec40-4a52-b6fb-463e64e0e872","Type":"ContainerStarted","Data":"c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.327125 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" event={"ID":"612fb5e2-ec40-4a52-b6fb-463e64e0e872","Type":"ContainerStarted","Data":"ae7d8525de78283e7edd86bcc0dd096d0053abda398b362335675ff7f7764e60"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.327649 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.336191 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-96rmm" event={"ID":"fccdaa28-9674-4bb6-9c58-3f3905df1e56","Type":"ContainerStarted","Data":"1b1d495636a608d927dd0e21328ad5426c36394bff62cd080769e682554f2041"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.337059 4966 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rckw5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.337107 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.338048 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nb7qq" event={"ID":"d7e627b1-5b87-424a-8640-f721cae406ec","Type":"ContainerStarted","Data":"95877edc6dd5283accdf38d7615bca53519b20283ce58f302bf47188fe686186"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.338271 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nb7qq"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.349308 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" event={"ID":"ac92efeb-93b0-4044-9b79-fbfc19fc629e","Type":"ContainerStarted","Data":"0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328"}
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.349346 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.386323 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj858\" (UniqueName: \"kubernetes.io/projected/dab55df7-5589-4454-9f76-2318e87b02bb-kube-api-access-fj858\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.386674 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.388007 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.887989357 +0000 UTC m=+125.190782845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.403319 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj858\" (UniqueName: \"kubernetes.io/projected/dab55df7-5589-4454-9f76-2318e87b02bb-kube-api-access-fj858\") pod \"ingress-operator-5b745b69d9-nhgff\" (UID: \"dab55df7-5589-4454-9f76-2318e87b02bb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.445136 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.449637 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.487636 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.487862 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:18.987841275 +0000 UTC m=+125.290634763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.488957 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.518793 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.018778068 +0000 UTC m=+125.321571546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.519513 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.528265 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.558914 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.566578 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.581210 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.590070 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.590391 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.090373386 +0000 UTC m=+125.393166874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.591252 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.696647 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.697027 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.197012795 +0000 UTC m=+125.499806283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.798771 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.799391 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.299369202 +0000 UTC m=+125.602162700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.895130 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"
Jan 27 15:44:18 crc kubenswrapper[4966]: I0127 15:44:18.900515 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:18 crc kubenswrapper[4966]: E0127 15:44:18.902235 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.402214363 +0000 UTC m=+125.705007851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.002982 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.003353 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.503338021 +0000 UTC m=+125.806131499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.104347 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.104882 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.60487111 +0000 UTC m=+125.907664598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.159271 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc"]
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.171231 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-52vpt"]
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.205534 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.205924 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.705877604 +0000 UTC m=+126.008671082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.233166 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj"]
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.239836 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9pb5n" podStartSLOduration=7.239819651 podStartE2EDuration="7.239819651s" podCreationTimestamp="2026-01-27 15:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:19.239564763 +0000 UTC m=+125.542358271" watchObservedRunningTime="2026-01-27 15:44:19.239819651 +0000 UTC m=+125.542613139"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.307042 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.307321 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.807310002 +0000 UTC m=+126.110103490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.391993 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" event={"ID":"767a1177-008b-4103-b582-5a679d5d6384","Type":"ContainerStarted","Data":"48f18b8d16f1ed9d14260fce0b294b856223c3477a24e57b71544f51809d4682"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.392205 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" event={"ID":"767a1177-008b-4103-b582-5a679d5d6384","Type":"ContainerStarted","Data":"e2ecfeaaca5631ec5b1216adbb6a15f7e22157fef7ce1936cf2f79e1f4be4a56"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.404803 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wzrqq" event={"ID":"08398814-3579-49c5-bf30-b8e700fabdab","Type":"ContainerStarted","Data":"bd020432a2d6ff1957eff4ef787cd85c20d0b8d3aaa3f6a34084f838c823a40b"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.404842 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wzrqq" event={"ID":"08398814-3579-49c5-bf30-b8e700fabdab","Type":"ContainerStarted","Data":"3c79a491a086e82e5d52423568d355c5420bc137823fe492a30636a37ea30e75"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.407231 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nb7qq" podStartSLOduration=7.407220081 podStartE2EDuration="7.407220081s" podCreationTimestamp="2026-01-27 15:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:19.406123588 +0000 UTC m=+125.708917096" watchObservedRunningTime="2026-01-27 15:44:19.407220081 +0000 UTC m=+125.710013569"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.407770 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.408093 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:19.908082879 +0000 UTC m=+126.210876367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.418702 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sgfdr" event={"ID":"a3c5438a-013d-48da-8a1b-8dd23e17bce6","Type":"ContainerStarted","Data":"1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.421257 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" event={"ID":"3438edc5-62d1-4e68-b4ac-aa41a4240e78","Type":"ContainerStarted","Data":"3546294f3af3130aef0b02874f69d1886e7aaf2ae71f2007b56dce4b9524681a"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.421284 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" event={"ID":"3438edc5-62d1-4e68-b4ac-aa41a4240e78","Type":"ContainerStarted","Data":"66a29acccc9a4055fd53b6cee0b5e168ed22237ca6f9cfac87130d74df1a0619"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.426297 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" event={"ID":"0046cf8f-c67b-4936-b3d6-1f7ac02eb919","Type":"ContainerStarted","Data":"a4165bb5e68d0d51bd39f333d00d61bbdde7b503e116c44eaf4366167bd10168"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.427570 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.431701 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk" event={"ID":"44a925db-1525-422d-ac47-5e1ded16d64f","Type":"ContainerStarted","Data":"397b4239ffb43b7ea8590b114eb3b97366a3f0935f573428aa7a390503bcef47"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.468501 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-96rmm" event={"ID":"fccdaa28-9674-4bb6-9c58-3f3905df1e56","Type":"ContainerStarted","Data":"7e4ac7047af7da9edf3994afd8b86f2f779775c714156b183357d31c7522db5e"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.501792 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" event={"ID":"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7","Type":"ContainerStarted","Data":"da6f0651a48dc7f8c240210ede898b3567adfda9bcc4659d72f37efd3ba68ff1"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.512278 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.513771 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.013759638 +0000 UTC m=+126.316553116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.515181 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c4rfs" podStartSLOduration=101.515162742 podStartE2EDuration="1m41.515162742s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:19.51221521 +0000 UTC m=+125.815008718" watchObservedRunningTime="2026-01-27 15:44:19.515162742 +0000 UTC m=+125.817956240"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.561132 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" event={"ID":"64e4f5ae-5aab-40a4-855b-8d7904027e63","Type":"ContainerStarted","Data":"7931e6949244bd4802d979e82fd5e3333faa7ffbb69f3c6249feff287892f632"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.561171 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" event={"ID":"64e4f5ae-5aab-40a4-855b-8d7904027e63","Type":"ContainerStarted","Data":"991a3e5e4e17b884e5411b325897313b0ce3f16c74e4a901763353e2c6ed7223"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.561787 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" podStartSLOduration=101.561769562 podStartE2EDuration="1m41.561769562s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:19.556578211 +0000 UTC m=+125.859371699" watchObservedRunningTime="2026-01-27 15:44:19.561769562 +0000 UTC m=+125.864563060"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.613379 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.613650 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.113635527 +0000 UTC m=+126.416429015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.635027 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" event={"ID":"12b9edc6-687c-47b9-b8c6-8fa656fc40de","Type":"ContainerStarted","Data":"8df87488c4564c6b6214014bff5f3c09ae41a9d1b9c170548638f483029f4791"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.635081 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" event={"ID":"12b9edc6-687c-47b9-b8c6-8fa656fc40de","Type":"ContainerStarted","Data":"1de1717da68a953b0c110143aba1cee379692e994f0c61fd4585aeac2578de4c"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.636059 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wpbsk"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.643828 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" event={"ID":"330c84f4-5179-4ec9-92d5-4a5dd18c799b","Type":"ContainerStarted","Data":"507824e0727534df510571a7275c185730236b5bcdc50bb9666b927ae4f7b9cd"}
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.653313 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.653383 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.658536 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.659347 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.697845 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kmmq4"]
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.716134 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.723566 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.223546148 +0000 UTC m=+126.526339646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.748787 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ztxq2"]
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.776443 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf"]
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.777734 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" podStartSLOduration=100.777714744 podStartE2EDuration="1m40.777714744s" podCreationTimestamp="2026-01-27 15:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:19.77245033 +0000 UTC m=+126.075243828" watchObservedRunningTime="2026-01-27 15:44:19.777714744 +0000 UTC m=+126.080508232"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.785680 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq"]
Jan 27 15:44:19 crc kubenswrapper[4966]: W0127 15:44:19.791070 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6902c6a5_688c_4d89_9f2d_126e6cdd5879.slice/crio-714891061d15fa806f6e0421ba9c2360119c9cef09d2fb983fc0413f57493411 WatchSource:0}: Error finding container 714891061d15fa806f6e0421ba9c2360119c9cef09d2fb983fc0413f57493411: Status 404 returned error can't find the container with id 714891061d15fa806f6e0421ba9c2360119c9cef09d2fb983fc0413f57493411
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.792544 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fws6n"]
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.817516 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.823143 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5"]
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.824404 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.324375476 +0000 UTC m=+126.627168964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.831129 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw"]
Jan 27 15:44:19 crc kubenswrapper[4966]: W0127 15:44:19.852570 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4ecd65d_fa3e_456b_8db6_314cc20216ed.slice/crio-98c44cadac728bd5147b6d5cd7af92dcb5e5d82c9cc7a7d9df1acf561e785d82 WatchSource:0}: Error finding container 98c44cadac728bd5147b6d5cd7af92dcb5e5d82c9cc7a7d9df1acf561e785d82: Status 404 returned error can't find the container with id 98c44cadac728bd5147b6d5cd7af92dcb5e5d82c9cc7a7d9df1acf561e785d82
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.863638 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq"]
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.869939 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" podStartSLOduration=101.869920254 podStartE2EDuration="1m41.869920254s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:19.821993292 +0000 UTC m=+126.124786800" watchObservedRunningTime="2026-01-27 15:44:19.869920254 +0000 UTC m=+126.172713742"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.919329 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:19 crc kubenswrapper[4966]: E0127 15:44:19.919581 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.419567989 +0000 UTC m=+126.722361487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.955505 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-h2595" podStartSLOduration=7.955485078 podStartE2EDuration="7.955485078s" podCreationTimestamp="2026-01-27 15:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:19.950723949 +0000 UTC m=+126.253517437" watchObservedRunningTime="2026-01-27 15:44:19.955485078 +0000 UTC m=+126.258278576"
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.956258 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh"]
Jan 27 15:44:19 crc kubenswrapper[4966]: W0127 15:44:19.964505 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4f1e013_7175_4268_908c_10658a7a8f1f.slice/crio-b2a59bb39d88db0aa40986ae0e6c1185bff11cedc93aa7097059c1aa3ead1b0d WatchSource:0}: Error finding container b2a59bb39d88db0aa40986ae0e6c1185bff11cedc93aa7097059c1aa3ead1b0d: Status 404 returned error can't find the container with id b2a59bb39d88db0aa40986ae0e6c1185bff11cedc93aa7097059c1aa3ead1b0d
Jan 27 15:44:19 crc kubenswrapper[4966]: I0127 15:44:19.970023 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2"]
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.009782 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr"]
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.020447 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.020849 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.520829451 +0000 UTC m=+126.823622939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:20 crc kubenswrapper[4966]: W0127 15:44:20.026615 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf02135d5_ce67_4a94_9f94_60c29b672231.slice/crio-53946ca2bf87244e023e8e825d9b2148c823099293bb435c22a8034ea8281b58 WatchSource:0}: Error finding container 53946ca2bf87244e023e8e825d9b2148c823099293bb435c22a8034ea8281b58: Status 404 returned error can't find the container with id 53946ca2bf87244e023e8e825d9b2148c823099293bb435c22a8034ea8281b58
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.036939 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-275fm"]
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.055970 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"]
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.064026 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tjhrf"]
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.066484 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc"]
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.077174 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-f2z26" podStartSLOduration=102.077157925 podStartE2EDuration="1m42.077157925s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:20.075417461 +0000 UTC m=+126.378210949" watchObservedRunningTime="2026-01-27 15:44:20.077157925 +0000 UTC m=+126.379951413"
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.120625 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9"]
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.121379 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.121649 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.621637319 +0000 UTC m=+126.924430797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.138647 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8"]
Jan 27 15:44:20 crc kubenswrapper[4966]: W0127 15:44:20.140619 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1317de86_7041_4b5a_8403_98489b8dc338.slice/crio-99eb6820c53a317753834e39513bbd3e79b69e2f8aad31ac9ee6f01f13b3d5be WatchSource:0}: Error finding container 99eb6820c53a317753834e39513bbd3e79b69e2f8aad31ac9ee6f01f13b3d5be: Status 404 returned error can't find the container with id 99eb6820c53a317753834e39513bbd3e79b69e2f8aad31ac9ee6f01f13b3d5be
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.183471 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff"]
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.196177 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4bvw" podStartSLOduration=102.196158409 podStartE2EDuration="1m42.196158409s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:20.19038501 +0000 UTC m=+126.493178498" watchObservedRunningTime="2026-01-27 15:44:20.196158409 +0000 UTC m=+126.498951897"
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.199115 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hl7j2"]
Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.224847 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.226561 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.726539215 +0000 UTC m=+127.029332703 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.226611 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.228676 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.728663401 +0000 UTC m=+127.031456889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.256118 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podStartSLOduration=102.256098395 podStartE2EDuration="1m42.256098395s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:20.254721362 +0000 UTC m=+126.557514870" watchObservedRunningTime="2026-01-27 15:44:20.256098395 +0000 UTC m=+126.558891893" Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.267649 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.278114 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:44:20 crc kubenswrapper[4966]: [-]has-synced failed: reason withheld Jan 27 15:44:20 crc kubenswrapper[4966]: [+]process-running ok Jan 27 15:44:20 crc kubenswrapper[4966]: healthz check failed Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.278166 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:44:20 crc kubenswrapper[4966]: W0127 15:44:20.293569 4966 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9e5d666_9ed7_45b5_9e80_b1a8cab36cda.slice/crio-179215e05ea5c07cd53e908dd179f63be05687b82316b1895ebf5209589e5dbe WatchSource:0}: Error finding container 179215e05ea5c07cd53e908dd179f63be05687b82316b1895ebf5209589e5dbe: Status 404 returned error can't find the container with id 179215e05ea5c07cd53e908dd179f63be05687b82316b1895ebf5209589e5dbe Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.327795 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.356058 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.856020386 +0000 UTC m=+127.158813874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.358518 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.358966 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.858948926 +0000 UTC m=+127.161742414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.461445 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.463417 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:20.963393757 +0000 UTC m=+127.266187245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.512724 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podStartSLOduration=102.512707983 podStartE2EDuration="1m42.512707983s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:20.512397113 +0000 UTC m=+126.815190611" watchObservedRunningTime="2026-01-27 15:44:20.512707983 +0000 UTC m=+126.815501471" Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.567325 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.567661 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.067649143 +0000 UTC m=+127.370442631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.624330 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qrsjk" podStartSLOduration=102.624314137 podStartE2EDuration="1m42.624314137s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:20.558741865 +0000 UTC m=+126.861535373" watchObservedRunningTime="2026-01-27 15:44:20.624314137 +0000 UTC m=+126.927107625" Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.675102 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.675454 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.175439677 +0000 UTC m=+127.478233165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.697105 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sgfdr" podStartSLOduration=102.697081362 podStartE2EDuration="1m42.697081362s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:20.692100656 +0000 UTC m=+126.994894154" watchObservedRunningTime="2026-01-27 15:44:20.697081362 +0000 UTC m=+126.999874850" Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.711304 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" event={"ID":"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d","Type":"ContainerStarted","Data":"73a84a44df54fa7c90d3314c507c725814311225e8a0d26de06957286255d515"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.773198 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" event={"ID":"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7","Type":"ContainerStarted","Data":"0c5f710c34837aad43e29bc0c48b96545640ce0c81c49eed8ca55672aace3f2f"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.776970 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.777268 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.277254247 +0000 UTC m=+127.580047735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.777586 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" event={"ID":"1317de86-7041-4b5a-8403-98489b8dc338","Type":"ContainerStarted","Data":"99eb6820c53a317753834e39513bbd3e79b69e2f8aad31ac9ee6f01f13b3d5be"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.782948 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" event={"ID":"4d3b7ce8-d257-4466-8385-0e506ba4cb38","Type":"ContainerStarted","Data":"f288ea0fc016db48d62341c4518fec96c272858854ae0fc39889227820872080"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.802208 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" event={"ID":"b4f1e013-7175-4268-908c-10658a7a8f1f","Type":"ContainerStarted","Data":"b2a59bb39d88db0aa40986ae0e6c1185bff11cedc93aa7097059c1aa3ead1b0d"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.821140 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kmmq4" event={"ID":"36c5078e-fb86-4817-a08e-6d4b4e2bee7f","Type":"ContainerStarted","Data":"4e3a9b2d9d5d7fc5b637b36af29f1e01423128596bd976c9be305f2d25a8cc3a"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.821187 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kmmq4" event={"ID":"36c5078e-fb86-4817-a08e-6d4b4e2bee7f","Type":"ContainerStarted","Data":"9e29b28e16405353ccb3b5de803b0327f1c74c90c462ab9d42d68582555615bb"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.821814 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-kmmq4" Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.825009 4966 patch_prober.go:28] interesting pod/downloads-7954f5f757-kmmq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.825051 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kmmq4" podUID="36c5078e-fb86-4817-a08e-6d4b4e2bee7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.837734 4966 generic.go:334] "Generic (PLEG): container finished" podID="3fe20580-4a7b-4b46-9cc2-07c852e9c866" containerID="6b2ba58225ba5506eae07b5a25b8416996f047e6664c71324927b781f5030ae6" exitCode=0 Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.837814 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" 
event={"ID":"3fe20580-4a7b-4b46-9cc2-07c852e9c866","Type":"ContainerDied","Data":"6b2ba58225ba5506eae07b5a25b8416996f047e6664c71324927b781f5030ae6"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.837839 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" event={"ID":"3fe20580-4a7b-4b46-9cc2-07c852e9c866","Type":"ContainerStarted","Data":"ef8ee36017fa0fb405dd293c6a509243dd8f7f8aac21a9054a749487c51ec182"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.857649 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" event={"ID":"767a1177-008b-4103-b582-5a679d5d6384","Type":"ContainerStarted","Data":"14dc15bd39226327c9fe00530e1c8181234be89f9be0ce0155e4406faa6c20a0"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.870263 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq" event={"ID":"d00c259a-2ad3-44dd-97d2-53e763da5ab1","Type":"ContainerStarted","Data":"a5af950b56b45a5275ae27c8e4838e6b64dadcec09990e3a84e127d2c330ae9c"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.872083 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" event={"ID":"805c6450-f56a-415f-85de-4a5b1df954b3","Type":"ContainerStarted","Data":"509ed584789e2e03ee34af95a6dfb127df8a0b944f739cec204aa9790880f858"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.878841 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.890218 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" event={"ID":"1a48af85-5ce3-4cd9-85bd-d0f88d38103a","Type":"ContainerStarted","Data":"34399407694b82d7bd4f4c91fac8bac720c27895970426f2d00a903edd56973f"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.890262 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" event={"ID":"1a48af85-5ce3-4cd9-85bd-d0f88d38103a","Type":"ContainerStarted","Data":"b8430195fb8376b2ef883032560fca57715f3936a2fa130f1b7aaf1986af4488"} Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.890335 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.390310996 +0000 UTC m=+127.693104484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.924999 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" event={"ID":"cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7","Type":"ContainerStarted","Data":"3a424ba2d907819fa04514027d4865b4c4c9106d1aa014c15035739dcce40afa"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.932409 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" event={"ID":"6048e5b0-2eb7-41b9-a0e1-53651ff008e2","Type":"ContainerStarted","Data":"56eff98ab05943021d4020d7679c2463bf44e237da231753de8c5e6eb82d4ab2"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.942556 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" event={"ID":"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6","Type":"ContainerStarted","Data":"bf01320752cdb04afb2fb7687f6c90f8861da12ad2ceb251eb35b86de8f6734a"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.952333 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" event={"ID":"dab55df7-5589-4454-9f76-2318e87b02bb","Type":"ContainerStarted","Data":"b0b2602a77815065448109316e9c81d451ae9edcaa47f787363f239213e56371"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.968072 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh" event={"ID":"1348f4a2-fbf7-4e01-82ad-3943f3bc628e","Type":"ContainerStarted","Data":"2779ad5cc77fad052f26639a1870ced208d719d47be8d8ecbc2a02a62c1d9979"} Jan 27 15:44:20 crc kubenswrapper[4966]: I0127 15:44:20.980714 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:20 crc kubenswrapper[4966]: E0127 15:44:20.984800 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.484785947 +0000 UTC m=+127.787579435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.014561 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2" event={"ID":"cfed2cb5-8390-4b53-998a-195d7cc17c90","Type":"ContainerStarted","Data":"e2675417e5c86611cab4bc6fb1677b860432640bc9423202dd0366b07ab2c5d4"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.018756 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" event={"ID":"c4ecd65d-fa3e-456b-8db6-314cc20216ed","Type":"ContainerStarted","Data":"86879ffcda37acb00832ea87979e76e7306201be9b58acf892c3e33ba3c70ed9"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.018802 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" event={"ID":"c4ecd65d-fa3e-456b-8db6-314cc20216ed","Type":"ContainerStarted","Data":"98c44cadac728bd5147b6d5cd7af92dcb5e5d82c9cc7a7d9df1acf561e785d82"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.044330 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" event={"ID":"6902c6a5-688c-4d89-9f2d-126e6cdd5879","Type":"ContainerStarted","Data":"82fc9a9b31f0f57859be53909d55073b02be9c071c1c4465983bf4afd83991cc"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.044374 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" event={"ID":"6902c6a5-688c-4d89-9f2d-126e6cdd5879","Type":"ContainerStarted","Data":"714891061d15fa806f6e0421ba9c2360119c9cef09d2fb983fc0413f57493411"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.059381 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" event={"ID":"a0612699-805c-409c-a48a-d9852f1c7f4f","Type":"ContainerStarted","Data":"863a72ab720fcdff8fd83ef97cd7da13aaee84dda9648846fb6eb5e028a990cb"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.059420 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" event={"ID":"a0612699-805c-409c-a48a-d9852f1c7f4f","Type":"ContainerStarted","Data":"880e732c2eb5662999fb25e6b62997a0b7e21d4f9b0c2283964872f4fa5ede1b"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.071293 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-52vpt" event={"ID":"1349c57b-da7f-4882-bb6f-73a883b23cea","Type":"ContainerStarted","Data":"5d84e95ddffa5c1f4d0673a05717d1cd9aab85f9a792b32907b3dbc560a83af4"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.071661 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-52vpt" event={"ID":"1349c57b-da7f-4882-bb6f-73a883b23cea","Type":"ContainerStarted","Data":"d6a880931dcc13e22cd229f7085bc711e7ce80d7b57b483dd5293bb627d9e78a"} Jan 27 
15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.081376 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.082180 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.582151497 +0000 UTC m=+127.884944985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.098682 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" event={"ID":"f02135d5-ce67-4a94-9f94-60c29b672231","Type":"ContainerStarted","Data":"53946ca2bf87244e023e8e825d9b2148c823099293bb435c22a8034ea8281b58"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.098717 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.108456 4966 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m7rc2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.108502 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" podUID="f02135d5-ce67-4a94-9f94-60c29b672231" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.110403 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" event={"ID":"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda","Type":"ContainerStarted","Data":"179215e05ea5c07cd53e908dd179f63be05687b82316b1895ebf5209589e5dbe"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.128057 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" event={"ID":"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958","Type":"ContainerStarted","Data":"2b9c23b2bf28bd464666693cfe1ee9856eeb98a886f0601cd11cf93a49848330"} Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.128233 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.139200 4966 patch_prober.go:28] 
interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.139227 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.142408 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wzrqq" podStartSLOduration=103.142396113 podStartE2EDuration="1m43.142396113s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.091819989 +0000 UTC m=+127.394613487" watchObservedRunningTime="2026-01-27 15:44:21.142396113 +0000 UTC m=+127.445189601" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.193816 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.195262 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.695250138 +0000 UTC m=+127.998043626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.222045 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xdlgl" podStartSLOduration=103.222026941 podStartE2EDuration="1m43.222026941s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.193118472 +0000 UTC m=+127.495911960" watchObservedRunningTime="2026-01-27 15:44:21.222026941 +0000 UTC m=+127.524820429" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.244031 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4lwnj" podStartSLOduration=103.244013575 podStartE2EDuration="1m43.244013575s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.242314123 +0000 UTC m=+127.545107611" watchObservedRunningTime="2026-01-27 15:44:21.244013575 +0000 UTC m=+127.546807063" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.274867 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:44:21 crc kubenswrapper[4966]: [-]has-synced failed: reason withheld Jan 27 15:44:21 crc kubenswrapper[4966]: [+]process-running ok Jan 27 15:44:21 crc kubenswrapper[4966]: healthz check failed Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.275250 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.295183 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.295468 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.795451997 +0000 UTC m=+128.098245485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.295921 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.300535 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.800522714 +0000 UTC m=+128.103316202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.395707 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" podStartSLOduration=103.395663296 podStartE2EDuration="1m43.395663296s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.393381615 +0000 UTC m=+127.696175133" watchObservedRunningTime="2026-01-27 15:44:21.395663296 +0000 UTC m=+127.698456784" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.413301 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.413653 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:21.913637076 +0000 UTC m=+128.216430564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.427772 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" podStartSLOduration=103.427753165 podStartE2EDuration="1m43.427753165s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.424740041 +0000 UTC m=+127.727533539" watchObservedRunningTime="2026-01-27 15:44:21.427753165 +0000 UTC m=+127.730546653" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.443082 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" podStartSLOduration=102.443061641 podStartE2EDuration="1m42.443061641s" podCreationTimestamp="2026-01-27 15:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.44014485 +0000 UTC m=+127.742938348" watchObservedRunningTime="2026-01-27 15:44:21.443061641 +0000 UTC m=+127.745855129" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.519195 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.519646 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.019633075 +0000 UTC m=+128.322426563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.544566 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hq4gf" podStartSLOduration=103.54454256 podStartE2EDuration="1m43.54454256s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.541837446 +0000 UTC m=+127.844630934" watchObservedRunningTime="2026-01-27 15:44:21.54454256 +0000 UTC m=+127.847336048" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.553214 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.592238 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podStartSLOduration=103.592221814 podStartE2EDuration="1m43.592221814s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.590693777 +0000 UTC m=+127.893487285" watchObservedRunningTime="2026-01-27 15:44:21.592221814 +0000 UTC m=+127.895015302" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.663389 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.664144 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.164123742 +0000 UTC m=+128.466917240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.681562 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" podStartSLOduration=103.681534224 podStartE2EDuration="1m43.681534224s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.653529932 +0000 UTC m=+127.956323440" watchObservedRunningTime="2026-01-27 15:44:21.681534224 +0000 UTC m=+127.984327712" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.683343 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.683451 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-kmmq4" podStartSLOduration=103.683428453 podStartE2EDuration="1m43.683428453s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.677465388 +0000 UTC m=+127.980258886" watchObservedRunningTime="2026-01-27 15:44:21.683428453 +0000 UTC m=+127.986221951" Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.683936 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.183919758 +0000 UTC m=+128.486713246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.707762 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq" podStartSLOduration=103.707717529 podStartE2EDuration="1m43.707717529s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.705763458 +0000 UTC m=+128.008556966" watchObservedRunningTime="2026-01-27 15:44:21.707717529 +0000 UTC m=+128.010511017" Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.745681 4966 csr.go:261] certificate signing request csr-4pzl4 is approved, waiting to be issued Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.771413 4966 csr.go:257] certificate signing request csr-4pzl4 is issued Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.784313 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.784616 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.284601662 +0000 UTC m=+128.587395150 (durationBeforeRetry 500ms). 
Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.796950 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ztxq2" podStartSLOduration=103.796934226 podStartE2EDuration="1m43.796934226s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.79546358 +0000 UTC m=+128.098257068" watchObservedRunningTime="2026-01-27 15:44:21.796934226 +0000 UTC m=+128.099727714"
Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.798667 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" podStartSLOduration=103.79865985 podStartE2EDuration="1m43.79865985s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.756399424 +0000 UTC m=+128.059192912" watchObservedRunningTime="2026-01-27 15:44:21.79865985 +0000 UTC m=+128.101453338"
Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.833272 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-clgw2" podStartSLOduration=103.833249337 podStartE2EDuration="1m43.833249337s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:21.826644721 +0000 UTC m=+128.129438219" watchObservedRunningTime="2026-01-27 15:44:21.833249337 +0000 UTC m=+128.136042825"
Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.890552 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.890887 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.39087645 +0000 UTC m=+128.693669928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:21 crc kubenswrapper[4966]: I0127 15:44:21.991363 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:21 crc kubenswrapper[4966]: E0127 15:44:21.992011 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.491995508 +0000 UTC m=+128.794788996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.052301 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.052363 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw"
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.093722 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.094038 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.594027224 +0000 UTC m=+128.896820712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
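The repeating mount/unmount failures above are kubelet's nested pending operations gate at work: each failed CSI operation records a "no retries permitted until" deadline (here 500ms out each time, the initial durationBeforeRetry), and the reconciler's next pass simply skips the volume until that deadline passes. A rough, illustrative sketch of that bookkeeping, not kubelet's actual API:

    // Rough sketch of a "no retries permitted until <deadline>" gate like the
    // one nestedpendingoperations.go logs above. Names and the backoff policy
    // (500ms initial, doubling, capped) are illustrative assumptions.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    type retryGate struct {
    	notBefore time.Time
    	backoff   time.Duration
    }

    func (g *retryGate) try(op func() error) error {
    	if time.Now().Before(g.notBefore) {
    		return fmt.Errorf("no retries permitted until %s", g.notBefore)
    	}
    	if err := op(); err != nil {
    		if g.backoff == 0 {
    			g.backoff = 500 * time.Millisecond // initial durationBeforeRetry
    		} else if g.backoff < 2*time.Minute {
    			g.backoff *= 2 // grow on repeated failure, capped
    		}
    		g.notBefore = time.Now().Add(g.backoff)
    		return err
    	}
    	g.backoff = 0 // success resets the gate
    	return nil
    }

    func main() {
    	g := &retryGate{}
    	mount := func() error { return errors.New("driver name kubevirt.io.hostpath-provisioner not found") }
    	fmt.Println(g.try(mount)) // fails and arms the 500ms gate
    	fmt.Println(g.try(mount)) // "no retries permitted until ..."
    }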
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.132851 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-52vpt" event={"ID":"1349c57b-da7f-4882-bb6f-73a883b23cea","Type":"ContainerStarted","Data":"ab040df10126bdfee744752b8c6522cf5d362d10053de4a9b179a41d1a4c7cdb"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.134413 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2" event={"ID":"cfed2cb5-8390-4b53-998a-195d7cc17c90","Type":"ContainerStarted","Data":"03a9bfcbd5580caf99bd7ce53f428d7be4288f87bead0f8c85d8d64efeeafde1"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.136148 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" event={"ID":"3fe20580-4a7b-4b46-9cc2-07c852e9c866","Type":"ContainerStarted","Data":"023c9b1316e1b99da4d0e7bcac3679acd02e5fc4f3b16f4ac7d60635b949de9e"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.137183 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" event={"ID":"ecccc492-0c85-414f-9ae9-2f5aa8df4e0d","Type":"ContainerStarted","Data":"7df873e843b4279e41bd7fd0e73f1e50603e7973752a168fbc3addb3bcff9541"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.139257 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" event={"ID":"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958","Type":"ContainerStarted","Data":"83199eac14e6d24d69364c158fddb5436530ed5932ffdb5bb12c9bc0b9842de6"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.140102 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body=
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.140134 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused"
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.141128 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-96rmm" event={"ID":"fccdaa28-9674-4bb6-9c58-3f3905df1e56","Type":"ContainerStarted","Data":"aef4e538580e69ce6ff2a9e1d021a76c46ae1924a5bd65758957b6c5f83ab58f"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.142377 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" event={"ID":"4d3b7ce8-d257-4466-8385-0e506ba4cb38","Type":"ContainerStarted","Data":"416e2bc09b74b8effaa4823006b5e389917e67966f17dcdc13238a1ce62366dd"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.144778 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c28bj" event={"ID":"a0612699-805c-409c-a48a-d9852f1c7f4f","Type":"ContainerStarted","Data":"1430417d2757d698d4214eb3923f8b8d5c8781246305f7415bfe55ddd4493725"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.146568 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" event={"ID":"dab55df7-5589-4454-9f76-2318e87b02bb","Type":"ContainerStarted","Data":"1c51e953ba2146448a98d19320de1b437d3b235a73229c665517ccf791b28f81"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.146611 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" event={"ID":"dab55df7-5589-4454-9f76-2318e87b02bb","Type":"ContainerStarted","Data":"e82fb2e42c73a5370e55e54e59982f15d7abd135d3daa96ee6f65ccc384d9afe"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.148548 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" event={"ID":"805c6450-f56a-415f-85de-4a5b1df954b3","Type":"ContainerStarted","Data":"ea1f9b557054d6a3499a0a9268b4d23dade42336350a412cb3c8af29c0a71bdb"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.149979 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" event={"ID":"b4f1e013-7175-4268-908c-10658a7a8f1f","Type":"ContainerStarted","Data":"11c2fc326dc00afabe2d97221092e08b9545c8b8ea1cac5757110b9ed768e8f4"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.154549 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-52vpt" podStartSLOduration=104.154535087 podStartE2EDuration="1m44.154535087s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.153264108 +0000 UTC m=+128.456057596" watchObservedRunningTime="2026-01-27 15:44:22.154535087 +0000 UTC m=+128.457328575"
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.192546 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4jwq" event={"ID":"d00c259a-2ad3-44dd-97d2-53e763da5ab1","Type":"ContainerStarted","Data":"80c8f998977d53fafa27bbe40a9400b2cf276d7d95f6c484c23b17ec75bb327f"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.194820 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.195013 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.694986957 +0000 UTC m=+128.997780445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.195124 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.195507 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.695499802 +0000 UTC m=+128.998293290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.196183 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hhnc" podStartSLOduration=104.196171143 podStartE2EDuration="1m44.196171143s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.193295914 +0000 UTC m=+128.496089402" watchObservedRunningTime="2026-01-27 15:44:22.196171143 +0000 UTC m=+128.498964631"
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.200355 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" event={"ID":"f9e5d666-9ed7-45b5-9e80-b1a8cab36cda","Type":"ContainerStarted","Data":"f95e553073221280ece646c88f0aab230b5f6f770d72dbfcf28ee684a743909d"}
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.238522 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ddksw" podStartSLOduration=104.238509331 podStartE2EDuration="1m44.238509331s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.23622301 +0000 UTC m=+128.539016498" watchObservedRunningTime="2026-01-27 15:44:22.238509331 +0000 UTC m=+128.541302819"
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.247280 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh" event={"ID":"1348f4a2-fbf7-4e01-82ad-3943f3bc628e","Type":"ContainerStarted","Data":"de03d34271b4db761a8cdfaf724a7ca14bbac11b6c2beaaa76fda54b35d38a64"}
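Each "Observed pod startup duration" record above is simple arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and because firstStartedPulling/lastFinishedPulling are the zero time (the images were already present), podStartE2EDuration comes out identical. A quick check of the first record's numbers in Go (the timestamps are taken from the log; the layout string is the standard Go reference time):

    // Verify podStartSLOduration from the control-plane-machine-set-operator
    // record: watchObservedRunningTime - podCreationTimestamp.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05 -0700 MST" // Go parses optional fractional seconds
    	created, _ := time.Parse(layout, "2026-01-27 15:42:38 +0000 UTC")
    	observed, _ := time.Parse(layout, "2026-01-27 15:44:21.707717529 +0000 UTC")
    	fmt.Println(observed.Sub(created)) // 1m43.707717529s, i.e. podStartSLOduration=103.707717529
    }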
pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh" event={"ID":"1348f4a2-fbf7-4e01-82ad-3943f3bc628e","Type":"ContainerStarted","Data":"de03d34271b4db761a8cdfaf724a7ca14bbac11b6c2beaaa76fda54b35d38a64"} Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.247338 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh" event={"ID":"1348f4a2-fbf7-4e01-82ad-3943f3bc628e","Type":"ContainerStarted","Data":"e65f9225688acc4cb5b55a09ad8d45a5e39a3eeb9958995a789b321db250a220"} Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.255413 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-275fm" event={"ID":"6048e5b0-2eb7-41b9-a0e1-53651ff008e2","Type":"ContainerStarted","Data":"2e033f133cdd029a29b850aaea325651bc5441b37b6adf57abd160b6d35bd54d"} Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.276958 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:44:22 crc kubenswrapper[4966]: [-]has-synced failed: reason withheld Jan 27 15:44:22 crc kubenswrapper[4966]: [+]process-running ok Jan 27 15:44:22 crc kubenswrapper[4966]: healthz check failed Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.277016 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.282547 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" podStartSLOduration=103.282535371 podStartE2EDuration="1m43.282535371s" podCreationTimestamp="2026-01-27 15:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.28055027 +0000 UTC m=+128.583343768" watchObservedRunningTime="2026-01-27 15:44:22.282535371 +0000 UTC m=+128.585328849" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.294215 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" event={"ID":"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6","Type":"ContainerStarted","Data":"22d7c4c40beca27781ab90f03b512e8744efdd00e64cc9eaf1f74f6e722837b0"} Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.294663 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.297626 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" event={"ID":"1317de86-7041-4b5a-8403-98489b8dc338","Type":"ContainerStarted","Data":"f25072628369f755c272ee07cda1df1a73662e13a81b36d22ef89a0b7f04e534"} Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.298988 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.299333 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.301377 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.800955935 +0000 UTC m=+129.103749423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.321914 4966 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tjhrf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.321968 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.329058 4966 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zk5gr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.329128 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" podUID="1317de86-7041-4b5a-8403-98489b8dc338" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.330322 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" event={"ID":"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7","Type":"ContainerStarted","Data":"712f7f3fa67c4eac572f4a1a00e314db48749e033361fb7e97d90c55e29b1f48"} Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.330368 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" event={"ID":"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7","Type":"ContainerStarted","Data":"bc324ae107f4af17ad08adbf3e4618fef421c5c5a16e4235422622ec36aff752"} Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.331023 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 
15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.336069 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" event={"ID":"f02135d5-ce67-4a94-9f94-60c29b672231","Type":"ContainerStarted","Data":"39d4e694c040cf5d6fc9f98a10a58faeed4a103b9c53d089c76fa84a7157cdb5"} Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.336612 4966 patch_prober.go:28] interesting pod/downloads-7954f5f757-kmmq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.336666 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kmmq4" podUID="36c5078e-fb86-4817-a08e-6d4b4e2bee7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.339319 4966 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m7rc2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.339362 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" podUID="f02135d5-ce67-4a94-9f94-60c29b672231" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.365363 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdhr5" podStartSLOduration=104.365342809 podStartE2EDuration="1m44.365342809s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.312416882 +0000 UTC m=+128.615210370" watchObservedRunningTime="2026-01-27 15:44:22.365342809 +0000 UTC m=+128.668136297" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.406635 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nhgff" podStartSLOduration=104.406620714 podStartE2EDuration="1m44.406620714s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.368220329 +0000 UTC m=+128.671013817" watchObservedRunningTime="2026-01-27 15:44:22.406620714 +0000 UTC m=+128.709414202" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.407117 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" podStartSLOduration=104.407111249 podStartE2EDuration="1m44.407111249s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.405290093 +0000 UTC m=+128.708083611" 
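All of the "connect: connection refused" readiness failures above are the kubelet prober doing a plain HTTP GET against the pod IP before the container has opened its listener; they clear on their own once the process binds (the status="ready" transitions a second later in the log). A minimal stand-in for what the prober does, using the marketplace-operator URL from the log; the helper itself is illustrative, not kubelet's prober code:

    // Minimal sketch of an HTTP readiness check like the failures logged above:
    // GET the pod's healthz endpoint and treat anything below 400 as ready.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func probe(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "dial tcp 10.217.0.36:8080: connect: connection refused"
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 400 {
    		return fmt.Errorf("probe failed with statuscode: %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := probe("http://10.217.0.36:8080/healthz"); err != nil {
    		fmt.Println("Probe failed:", err)
    	}
    }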
watchObservedRunningTime="2026-01-27 15:44:22.407111249 +0000 UTC m=+128.709904737" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.407859 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.408252 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:22.908233894 +0000 UTC m=+129.211027382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.492960 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" podStartSLOduration=104.49292598 podStartE2EDuration="1m44.49292598s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.484377154 +0000 UTC m=+128.787170662" watchObservedRunningTime="2026-01-27 15:44:22.49292598 +0000 UTC m=+128.795719468" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.493305 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hl7j2" podStartSLOduration=103.493298222 podStartE2EDuration="1m43.493298222s" podCreationTimestamp="2026-01-27 15:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.433282044 +0000 UTC m=+128.736075542" watchObservedRunningTime="2026-01-27 15:44:22.493298222 +0000 UTC m=+128.796091710" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.503662 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g8mr8" podStartSLOduration=104.503641003 podStartE2EDuration="1m44.503641003s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.503418726 +0000 UTC m=+128.806212224" watchObservedRunningTime="2026-01-27 15:44:22.503641003 +0000 UTC m=+128.806434491" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.510021 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.510338 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.010309141 +0000 UTC m=+129.313102629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.592779 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfvh" podStartSLOduration=104.592745668 podStartE2EDuration="1m44.592745668s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.560680279 +0000 UTC m=+128.863473777" watchObservedRunningTime="2026-01-27 15:44:22.592745668 +0000 UTC m=+128.895539156" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.593833 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" podStartSLOduration=104.593828251 podStartE2EDuration="1m44.593828251s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.591321243 +0000 UTC m=+128.894114751" watchObservedRunningTime="2026-01-27 15:44:22.593828251 +0000 UTC m=+128.896621739" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.612643 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.612987 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.112975367 +0000 UTC m=+129.415768845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.613211 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" podStartSLOduration=104.613193814 podStartE2EDuration="1m44.613193814s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:22.610753888 +0000 UTC m=+128.913547386" watchObservedRunningTime="2026-01-27 15:44:22.613193814 +0000 UTC m=+128.915987302" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.681334 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.714295 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.714667 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.214650222 +0000 UTC m=+129.517443710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.772388 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 15:39:21 +0000 UTC, rotation deadline is 2026-11-25 07:09:30.734668795 +0000 UTC Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.772433 4966 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7239h25m7.962238705s for next certificate rotation Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.788860 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.788914 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.797808 4966 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-gl8pc container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.797861 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" podUID="3fe20580-4a7b-4b46-9cc2-07c852e9c866" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.816538 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.816871 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.316860144 +0000 UTC m=+129.619653622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.917718 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.917970 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.417940569 +0000 UTC m=+129.720734057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.918194 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:22 crc kubenswrapper[4966]: E0127 15:44:22.918518 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.418507547 +0000 UTC m=+129.721301035 (durationBeforeRetry 500ms). 
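The two certificate_manager records above are the kubelet-serving certificate rotation schedule: the certificate expires 2027-01-27 15:39:21, the manager picked a jittered rotation deadline of 2026-11-25 07:09:30 (about 83% of the way through the one-year lifetime, consistent with the 70-90% jitter window client-go's certificate manager applies), and the logged wait is simply deadline minus now. Checking the arithmetic with the timestamps from the log:

    // Verify the certificate_manager wait: rotation deadline minus the wall
    // time of the log record (15:44:22.772433 on 2026-01-27).
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05 -0700 MST" // Go parses optional fractional seconds
    	now, _ := time.Parse(layout, "2026-01-27 15:44:22.772433 +0000 UTC")
    	deadline, _ := time.Parse(layout, "2026-11-25 07:09:30.734668795 +0000 UTC")
    	fmt.Println(deadline.Sub(now)) // ~7239h25m7.96s, matching "Waiting 7239h25m7.962238705s"
    }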
Jan 27 15:44:22 crc kubenswrapper[4966]: I0127 15:44:22.948606 4966 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.019007 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.019320 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.519303885 +0000 UTC m=+129.822097373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.120644 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.121022 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.621006911 +0000 UTC m=+129.923800399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.222404 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.222705 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.722686896 +0000 UTC m=+130.025480384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.222805 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.223124 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.723111278 +0000 UTC m=+130.025904766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.272683 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:44:23 crc kubenswrapper[4966]: [-]has-synced failed: reason withheld
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]process-running ok
Jan 27 15:44:23 crc kubenswrapper[4966]: healthz check failed
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.272753 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.323739 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.323924 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.823886795 +0000 UTC m=+130.126680283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.324032 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.324343 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.824335019 +0000 UTC m=+130.127128507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.349787 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-96rmm" event={"ID":"fccdaa28-9674-4bb6-9c58-3f3905df1e56","Type":"ContainerStarted","Data":"2e8eb1cca352e4f57cdb14608652c90e2e361c7a6351c0e47c65adefa277184e"}
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.349825 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-96rmm" event={"ID":"fccdaa28-9674-4bb6-9c58-3f3905df1e56","Type":"ContainerStarted","Data":"2439a1d95fc8fd9c81de467a68ffd3d4a0adf52ab584b074adeaa43f335fe864"}
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.350535 4966 patch_prober.go:28] interesting pod/downloads-7954f5f757-kmmq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.350574 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kmmq4" podUID="36c5078e-fb86-4817-a08e-6d4b4e2bee7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.350701 4966 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tjhrf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.350759 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.360000 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr"
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.361172 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2"
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.381063 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-96rmm" podStartSLOduration=11.381044125 podStartE2EDuration="11.381044125s" podCreationTimestamp="2026-01-27 15:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:23.380000322 +0000 UTC m=+129.682793820" watchObservedRunningTime="2026-01-27 15:44:23.381044125 +0000 UTC m=+129.683837613"
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.411616 4966 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xgjgw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]log ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]etcd ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/max-in-flight-filter ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 27 15:44:23 crc kubenswrapper[4966]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/openshift.io-startinformers ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 27 15:44:23 crc kubenswrapper[4966]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 27 15:44:23 crc kubenswrapper[4966]: livez check failed
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.411683 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" podUID="cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.425347 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.425662 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.925623502 +0000 UTC m=+130.228416990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.427363 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.428309 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:23.928300585 +0000 UTC m=+130.231094163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.531430 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.531593 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:24.03156968 +0000 UTC m=+130.334363168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.531711 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.532004 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:24.031996763 +0000 UTC m=+130.334790251 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.619248 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq"
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.632667 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.632782 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:24.1327619 +0000 UTC m=+130.435555388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.633029 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.633364 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:24.133353228 +0000 UTC m=+130.436146716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.733547 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.733660 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:44:24.23363732 +0000 UTC m=+130.536430808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.733794 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:44:23 crc kubenswrapper[4966]: E0127 15:44:23.734262 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:44:24.234239679 +0000 UTC m=+130.537033167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kctp8" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.737479 4966 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T15:44:22.948637565Z","Handler":null,"Name":""}
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.771623 4966 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.771668 4966 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.835328 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.841319 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8".
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.936963 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.959544 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 15:44:23 crc kubenswrapper[4966]: I0127 15:44:23.959592 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.037235 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kctp8\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") " pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.096716 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.105749 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.271181 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:44:24 crc kubenswrapper[4966]: [-]has-synced failed: reason withheld Jan 27 15:44:24 crc kubenswrapper[4966]: [+]process-running ok Jan 27 15:44:24 crc kubenswrapper[4966]: healthz check failed Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.271527 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.531553 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.564620 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kctp8"] Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.623467 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dnhcl"] Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.624379 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.626057 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.639482 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnhcl"] Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.747407 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-catalog-content\") pod \"certified-operators-dnhcl\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.747476 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ppn\" (UniqueName: \"kubernetes.io/projected/c3ad1e5f-77aa-4005-bd12-618819d83c12-kube-api-access-77ppn\") pod \"certified-operators-dnhcl\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.747511 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-utilities\") pod \"certified-operators-dnhcl\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.804316 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtmd5"] Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 
15:44:24.805172 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.807128 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.816566 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtmd5"] Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.848491 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77ppn\" (UniqueName: \"kubernetes.io/projected/c3ad1e5f-77aa-4005-bd12-618819d83c12-kube-api-access-77ppn\") pod \"certified-operators-dnhcl\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.848557 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-utilities\") pod \"certified-operators-dnhcl\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.848700 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-catalog-content\") pod \"certified-operators-dnhcl\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.849750 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-catalog-content\") pod \"certified-operators-dnhcl\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.849828 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-utilities\") pod \"certified-operators-dnhcl\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.867836 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77ppn\" (UniqueName: \"kubernetes.io/projected/c3ad1e5f-77aa-4005-bd12-618819d83c12-kube-api-access-77ppn\") pod \"certified-operators-dnhcl\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.948479 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.949519 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-utilities\") pod \"community-operators-gtmd5\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.949589 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzws2\" (UniqueName: \"kubernetes.io/projected/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-kube-api-access-kzws2\") pod \"community-operators-gtmd5\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:24 crc kubenswrapper[4966]: I0127 15:44:24.949732 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-catalog-content\") pod \"community-operators-gtmd5\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.009754 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rqggh"] Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.010884 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.022136 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqggh"] Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.051371 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-utilities\") pod \"community-operators-gtmd5\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.051419 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzws2\" (UniqueName: \"kubernetes.io/projected/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-kube-api-access-kzws2\") pod \"community-operators-gtmd5\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.051493 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-catalog-content\") pod \"community-operators-gtmd5\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.051960 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-catalog-content\") pod \"community-operators-gtmd5\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.052125 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-utilities\") pod \"community-operators-gtmd5\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.077138 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzws2\" (UniqueName: \"kubernetes.io/projected/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-kube-api-access-kzws2\") pod \"community-operators-gtmd5\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.143299 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.152729 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd6pk\" (UniqueName: \"kubernetes.io/projected/5171e70d-cb12-4969-b12f-535c813ce6b9-kube-api-access-hd6pk\") pod \"certified-operators-rqggh\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") " pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.152795 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-utilities\") pod \"certified-operators-rqggh\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") " pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.152855 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-catalog-content\") pod \"certified-operators-rqggh\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") " pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.190678 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnhcl"] Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.211352 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s6njw"] Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.212166 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.223516 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6njw"] Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.253997 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-catalog-content\") pod \"certified-operators-rqggh\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") " pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.254133 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd6pk\" (UniqueName: \"kubernetes.io/projected/5171e70d-cb12-4969-b12f-535c813ce6b9-kube-api-access-hd6pk\") pod \"certified-operators-rqggh\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") " pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.254200 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-utilities\") pod \"certified-operators-rqggh\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") " pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.255394 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-catalog-content\") pod \"certified-operators-rqggh\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") " pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.256507 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-utilities\") pod \"certified-operators-rqggh\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") " pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.271396 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:44:25 crc kubenswrapper[4966]: [-]has-synced failed: reason withheld Jan 27 15:44:25 crc kubenswrapper[4966]: [+]process-running ok Jan 27 15:44:25 crc kubenswrapper[4966]: healthz check failed Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.271468 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.278187 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd6pk\" (UniqueName: \"kubernetes.io/projected/5171e70d-cb12-4969-b12f-535c813ce6b9-kube-api-access-hd6pk\") pod \"certified-operators-rqggh\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") " pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.351140 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.356438 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-catalog-content\") pod \"community-operators-s6njw\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.356563 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-utilities\") pod \"community-operators-s6njw\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.356615 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlnjb\" (UniqueName: \"kubernetes.io/projected/11e4932d-65cd-40d3-a441-604e8d96855c-kube-api-access-qlnjb\") pod \"community-operators-s6njw\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.401694 4966 generic.go:334] "Generic (PLEG): container finished" podID="4d3b7ce8-d257-4466-8385-0e506ba4cb38" containerID="416e2bc09b74b8effaa4823006b5e389917e67966f17dcdc13238a1ce62366dd" exitCode=0 Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.401780 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" event={"ID":"4d3b7ce8-d257-4466-8385-0e506ba4cb38","Type":"ContainerDied","Data":"416e2bc09b74b8effaa4823006b5e389917e67966f17dcdc13238a1ce62366dd"} Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.406429 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" event={"ID":"b8111aeb-2c95-4953-a2d0-586c5fcd4940","Type":"ContainerStarted","Data":"e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3"} Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.406475 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" event={"ID":"b8111aeb-2c95-4953-a2d0-586c5fcd4940","Type":"ContainerStarted","Data":"83df70eba59f7c93336512b6f83ee62943058226ccb4520356f4b4c0c8feb67e"} Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.407293 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.409438 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnhcl" event={"ID":"c3ad1e5f-77aa-4005-bd12-618819d83c12","Type":"ContainerStarted","Data":"9199338a4760f5c9899818c13748e6e8e09f8a4c793b2f5bb86795c847902705"} Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.409507 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnhcl" event={"ID":"c3ad1e5f-77aa-4005-bd12-618819d83c12","Type":"ContainerStarted","Data":"528d574363302529d696bd0b276d6d7cef2d9e91d16fdae828957b7d05046ed0"} Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.457659 4966 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-utilities\") pod \"community-operators-s6njw\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.458044 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlnjb\" (UniqueName: \"kubernetes.io/projected/11e4932d-65cd-40d3-a441-604e8d96855c-kube-api-access-qlnjb\") pod \"community-operators-s6njw\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.458081 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-catalog-content\") pod \"community-operators-s6njw\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.459434 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-utilities\") pod \"community-operators-s6njw\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.459482 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-catalog-content\") pod \"community-operators-s6njw\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.465685 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" podStartSLOduration=107.465670972 podStartE2EDuration="1m47.465670972s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:44:25.46397493 +0000 UTC m=+131.766768428" watchObservedRunningTime="2026-01-27 15:44:25.465670972 +0000 UTC m=+131.768464460" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.481437 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlnjb\" (UniqueName: \"kubernetes.io/projected/11e4932d-65cd-40d3-a441-604e8d96855c-kube-api-access-qlnjb\") pod \"community-operators-s6njw\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.535455 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.548400 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqggh"] Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.599559 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtmd5"] Jan 27 15:44:25 crc kubenswrapper[4966]: I0127 15:44:25.931083 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6njw"] Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.269967 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:44:26 crc kubenswrapper[4966]: [-]has-synced failed: reason withheld Jan 27 15:44:26 crc kubenswrapper[4966]: [+]process-running ok Jan 27 15:44:26 crc kubenswrapper[4966]: healthz check failed Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.270018 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.424230 4966 generic.go:334] "Generic (PLEG): container finished" podID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerID="c447b003d2afc5266dfcf5a227e7a410ad0a33c7258bbc7bdf80816941c018fa" exitCode=0 Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.424295 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqggh" event={"ID":"5171e70d-cb12-4969-b12f-535c813ce6b9","Type":"ContainerDied","Data":"c447b003d2afc5266dfcf5a227e7a410ad0a33c7258bbc7bdf80816941c018fa"} Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.425111 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqggh" event={"ID":"5171e70d-cb12-4969-b12f-535c813ce6b9","Type":"ContainerStarted","Data":"93d0a5f475b2f0df3978bfa25500eaafbd0cfe0419d1a1bade69fc2fc5ec55df"} Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.427829 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.430368 4966 generic.go:334] "Generic (PLEG): container finished" podID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerID="9199338a4760f5c9899818c13748e6e8e09f8a4c793b2f5bb86795c847902705" exitCode=0 Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.430450 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnhcl" event={"ID":"c3ad1e5f-77aa-4005-bd12-618819d83c12","Type":"ContainerDied","Data":"9199338a4760f5c9899818c13748e6e8e09f8a4c793b2f5bb86795c847902705"} Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.445672 4966 generic.go:334] "Generic (PLEG): container finished" podID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerID="2e687a76f890ee6fdad32b0e81382a029765024915768a1d4a62c5194e60bda3" exitCode=0 Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.445795 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmd5" 
event={"ID":"6e7220b0-9ce3-461c-a434-e09d4fde1b0a","Type":"ContainerDied","Data":"2e687a76f890ee6fdad32b0e81382a029765024915768a1d4a62c5194e60bda3"} Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.445827 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmd5" event={"ID":"6e7220b0-9ce3-461c-a434-e09d4fde1b0a","Type":"ContainerStarted","Data":"2308d2af77c77b1e6552f761eb9a67621e05d54d6c46c6a9cd5bda4ca58b7e57"} Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.448058 4966 generic.go:334] "Generic (PLEG): container finished" podID="11e4932d-65cd-40d3-a441-604e8d96855c" containerID="597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef" exitCode=0 Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.448478 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6njw" event={"ID":"11e4932d-65cd-40d3-a441-604e8d96855c","Type":"ContainerDied","Data":"597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef"} Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.448541 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6njw" event={"ID":"11e4932d-65cd-40d3-a441-604e8d96855c","Type":"ContainerStarted","Data":"8c0e14a1aab7200d5f12d3a267207833366197f02b0ddeeb6edbce1ed56ea8cc"} Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.692198 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.775931 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3b7ce8-d257-4466-8385-0e506ba4cb38-secret-volume\") pod \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.776368 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56ckh\" (UniqueName: \"kubernetes.io/projected/4d3b7ce8-d257-4466-8385-0e506ba4cb38-kube-api-access-56ckh\") pod \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.776499 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3b7ce8-d257-4466-8385-0e506ba4cb38-config-volume\") pod \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\" (UID: \"4d3b7ce8-d257-4466-8385-0e506ba4cb38\") " Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.777120 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3b7ce8-d257-4466-8385-0e506ba4cb38-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d3b7ce8-d257-4466-8385-0e506ba4cb38" (UID: "4d3b7ce8-d257-4466-8385-0e506ba4cb38"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.781209 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3b7ce8-d257-4466-8385-0e506ba4cb38-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d3b7ce8-d257-4466-8385-0e506ba4cb38" (UID: "4d3b7ce8-d257-4466-8385-0e506ba4cb38"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.781305 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3b7ce8-d257-4466-8385-0e506ba4cb38-kube-api-access-56ckh" (OuterVolumeSpecName: "kube-api-access-56ckh") pod "4d3b7ce8-d257-4466-8385-0e506ba4cb38" (UID: "4d3b7ce8-d257-4466-8385-0e506ba4cb38"). InnerVolumeSpecName "kube-api-access-56ckh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.808108 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wrghj"] Jan 27 15:44:26 crc kubenswrapper[4966]: E0127 15:44:26.808293 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3b7ce8-d257-4466-8385-0e506ba4cb38" containerName="collect-profiles" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.808303 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3b7ce8-d257-4466-8385-0e506ba4cb38" containerName="collect-profiles" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.808402 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3b7ce8-d257-4466-8385-0e506ba4cb38" containerName="collect-profiles" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.809026 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.812278 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.818439 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrghj"] Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.878269 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-catalog-content\") pod \"redhat-marketplace-wrghj\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.878370 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w84bx\" (UniqueName: \"kubernetes.io/projected/bbad73d0-4e65-4601-9d3e-7ac464269b5f-kube-api-access-w84bx\") pod \"redhat-marketplace-wrghj\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.878418 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-utilities\") pod \"redhat-marketplace-wrghj\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.878547 4966 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3b7ce8-d257-4466-8385-0e506ba4cb38-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.878563 4966 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4d3b7ce8-d257-4466-8385-0e506ba4cb38-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.878576 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56ckh\" (UniqueName: \"kubernetes.io/projected/4d3b7ce8-d257-4466-8385-0e506ba4cb38-kube-api-access-56ckh\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.979914 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-catalog-content\") pod \"redhat-marketplace-wrghj\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.979974 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w84bx\" (UniqueName: \"kubernetes.io/projected/bbad73d0-4e65-4601-9d3e-7ac464269b5f-kube-api-access-w84bx\") pod \"redhat-marketplace-wrghj\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.980005 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-utilities\") pod \"redhat-marketplace-wrghj\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.980492 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-utilities\") pod \"redhat-marketplace-wrghj\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:26 crc kubenswrapper[4966]: I0127 15:44:26.980796 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-catalog-content\") pod \"redhat-marketplace-wrghj\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.002415 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w84bx\" (UniqueName: \"kubernetes.io/projected/bbad73d0-4e65-4601-9d3e-7ac464269b5f-kube-api-access-w84bx\") pod \"redhat-marketplace-wrghj\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.017030 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.018067 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.023491 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.023808 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.026296 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.057781 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.065412 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.080751 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf3024e0-9e0a-48aa-801b-84aa6e241811-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf3024e0-9e0a-48aa-801b-84aa6e241811\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.080836 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf3024e0-9e0a-48aa-801b-84aa6e241811-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf3024e0-9e0a-48aa-801b-84aa6e241811\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.163097 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.182489 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf3024e0-9e0a-48aa-801b-84aa6e241811-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf3024e0-9e0a-48aa-801b-84aa6e241811\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.182536 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf3024e0-9e0a-48aa-801b-84aa6e241811-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf3024e0-9e0a-48aa-801b-84aa6e241811\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.182587 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf3024e0-9e0a-48aa-801b-84aa6e241811-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf3024e0-9e0a-48aa-801b-84aa6e241811\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.219251 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf3024e0-9e0a-48aa-801b-84aa6e241811-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf3024e0-9e0a-48aa-801b-84aa6e241811\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.223343 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9lpj5"] Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.224444 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.240288 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lpj5"] Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.276713 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:44:27 crc kubenswrapper[4966]: [-]has-synced failed: reason withheld Jan 27 15:44:27 crc kubenswrapper[4966]: [+]process-running ok Jan 27 15:44:27 crc kubenswrapper[4966]: healthz check failed Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.276772 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.283479 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-catalog-content\") pod \"redhat-marketplace-9lpj5\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.283652 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsn9\" (UniqueName: \"kubernetes.io/projected/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-kube-api-access-ktsn9\") pod \"redhat-marketplace-9lpj5\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.283716 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-utilities\") pod \"redhat-marketplace-9lpj5\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.365101 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.384640 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-catalog-content\") pod \"redhat-marketplace-9lpj5\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.384715 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktsn9\" (UniqueName: \"kubernetes.io/projected/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-kube-api-access-ktsn9\") pod \"redhat-marketplace-9lpj5\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.384751 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-utilities\") pod \"redhat-marketplace-9lpj5\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.385646 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-utilities\") pod \"redhat-marketplace-9lpj5\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.385677 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-catalog-content\") pod \"redhat-marketplace-9lpj5\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.414979 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktsn9\" (UniqueName: \"kubernetes.io/projected/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-kube-api-access-ktsn9\") pod \"redhat-marketplace-9lpj5\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.463689 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" event={"ID":"4d3b7ce8-d257-4466-8385-0e506ba4cb38","Type":"ContainerDied","Data":"f288ea0fc016db48d62341c4518fec96c272858854ae0fc39889227820872080"} Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.463742 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f288ea0fc016db48d62341c4518fec96c272858854ae0fc39889227820872080" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.463799 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.468388 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrghj"] Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.550755 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.730220 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.730864 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.758846 4966 patch_prober.go:28] interesting pod/console-f9d7485db-sgfdr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.758890 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sgfdr" podUID="a3c5438a-013d-48da-8a1b-8dd23e17bce6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.800834 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.808082 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrddt"] Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.809279 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.811749 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.815521 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.846581 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrddt"] Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.849792 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 15:44:27 crc kubenswrapper[4966]: W0127 15:44:27.870700 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcf3024e0_9e0a_48aa_801b_84aa6e241811.slice/crio-b2a1c53ef615be14edeb31f0ea3bc5160607925e154c82ad3960227496b4c0a7 WatchSource:0}: Error finding container b2a1c53ef615be14edeb31f0ea3bc5160607925e154c82ad3960227496b4c0a7: Status 404 returned error can't find the container with id b2a1c53ef615be14edeb31f0ea3bc5160607925e154c82ad3960227496b4c0a7 Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.895273 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-catalog-content\") pod \"redhat-operators-jrddt\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.895333 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-utilities\") pod \"redhat-operators-jrddt\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.895437 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xftq\" (UniqueName: \"kubernetes.io/projected/9e0696ef-3017-4937-92ee-fe9e794c9fdd-kube-api-access-8xftq\") pod \"redhat-operators-jrddt\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:27 crc kubenswrapper[4966]: I0127 15:44:27.991977 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lpj5"] Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:27.996237 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-catalog-content\") pod \"redhat-operators-jrddt\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:27.996273 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-utilities\") pod \"redhat-operators-jrddt\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:27.996329 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xftq\" (UniqueName: \"kubernetes.io/projected/9e0696ef-3017-4937-92ee-fe9e794c9fdd-kube-api-access-8xftq\") pod \"redhat-operators-jrddt\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:27.997122 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-catalog-content\") pod \"redhat-operators-jrddt\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:27.997325 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-utilities\") pod \"redhat-operators-jrddt\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:28 crc kubenswrapper[4966]: W0127 15:44:28.003123 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42e73b55_ac4e_4b3d_80be_2d9320fa42ce.slice/crio-09b88e93eaca967d61c350a888bc9151ccacf078a82bd296a0db4c426a37e618 WatchSource:0}: Error finding container 09b88e93eaca967d61c350a888bc9151ccacf078a82bd296a0db4c426a37e618: Status 404 returned error can't find the container with id 09b88e93eaca967d61c350a888bc9151ccacf078a82bd296a0db4c426a37e618 Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.022040 4966 patch_prober.go:28] interesting pod/downloads-7954f5f757-kmmq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: 
connect: connection refused" start-of-body= Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.022106 4966 patch_prober.go:28] interesting pod/downloads-7954f5f757-kmmq4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.022138 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kmmq4" podUID="36c5078e-fb86-4817-a08e-6d4b4e2bee7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.022147 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kmmq4" podUID="36c5078e-fb86-4817-a08e-6d4b4e2bee7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.027236 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xftq\" (UniqueName: \"kubernetes.io/projected/9e0696ef-3017-4937-92ee-fe9e794c9fdd-kube-api-access-8xftq\") pod \"redhat-operators-jrddt\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.164463 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.210280 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nxd8p"] Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.213257 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.213370 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.217825 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxd8p"] Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.266956 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.270630 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:44:28 crc kubenswrapper[4966]: [-]has-synced failed: reason withheld Jan 27 15:44:28 crc kubenswrapper[4966]: [+]process-running ok Jan 27 15:44:28 crc kubenswrapper[4966]: healthz check failed Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.270707 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.298786 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-catalog-content\") pod \"redhat-operators-nxd8p\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.298881 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-utilities\") pod \"redhat-operators-nxd8p\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.299033 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2krw8\" (UniqueName: \"kubernetes.io/projected/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-kube-api-access-2krw8\") pod \"redhat-operators-nxd8p\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.401495 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-catalog-content\") pod \"redhat-operators-nxd8p\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.401907 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-utilities\") pod \"redhat-operators-nxd8p\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.402000 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2krw8\" (UniqueName: \"kubernetes.io/projected/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-kube-api-access-2krw8\") pod 
\"redhat-operators-nxd8p\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.402299 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-utilities\") pod \"redhat-operators-nxd8p\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.402299 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-catalog-content\") pod \"redhat-operators-nxd8p\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.421663 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2krw8\" (UniqueName: \"kubernetes.io/projected/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-kube-api-access-2krw8\") pod \"redhat-operators-nxd8p\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.474953 4966 generic.go:334] "Generic (PLEG): container finished" podID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerID="5586c2f2f0faa7f0931a1dfd67f401d1bfcc5c908be52ce91621c83634a6b93f" exitCode=0 Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.475083 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrghj" event={"ID":"bbad73d0-4e65-4601-9d3e-7ac464269b5f","Type":"ContainerDied","Data":"5586c2f2f0faa7f0931a1dfd67f401d1bfcc5c908be52ce91621c83634a6b93f"} Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.475131 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrghj" event={"ID":"bbad73d0-4e65-4601-9d3e-7ac464269b5f","Type":"ContainerStarted","Data":"6d6c291c45e9ec7ed0f7e916c770b419d45df29ae8e2e3f4d842e555412ad38e"} Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.479181 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf3024e0-9e0a-48aa-801b-84aa6e241811","Type":"ContainerStarted","Data":"b2a1c53ef615be14edeb31f0ea3bc5160607925e154c82ad3960227496b4c0a7"} Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.480105 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nb7qq" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.483693 4966 generic.go:334] "Generic (PLEG): container finished" podID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerID="ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7" exitCode=0 Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.487518 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lpj5" event={"ID":"42e73b55-ac4e-4b3d-80be-2d9320fa42ce","Type":"ContainerDied","Data":"ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7"} Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.487601 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lpj5" 
event={"ID":"42e73b55-ac4e-4b3d-80be-2d9320fa42ce","Type":"ContainerStarted","Data":"09b88e93eaca967d61c350a888bc9151ccacf078a82bd296a0db4c426a37e618"} Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.576286 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.643535 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrddt"] Jan 27 15:44:28 crc kubenswrapper[4966]: W0127 15:44:28.703084 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e0696ef_3017_4937_92ee_fe9e794c9fdd.slice/crio-138cf139f1f4d72ea0f28b267091e4ce7d4ae249fdb1319d425dfdf3bfc93271 WatchSource:0}: Error finding container 138cf139f1f4d72ea0f28b267091e4ce7d4ae249fdb1319d425dfdf3bfc93271: Status 404 returned error can't find the container with id 138cf139f1f4d72ea0f28b267091e4ce7d4ae249fdb1319d425dfdf3bfc93271 Jan 27 15:44:28 crc kubenswrapper[4966]: I0127 15:44:28.911148 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxd8p"] Jan 27 15:44:28 crc kubenswrapper[4966]: W0127 15:44:28.916714 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b3beb41_9a6e_40c4_b0bb_96938ba3b9d4.slice/crio-c4ee07a0d8233ed2523adfefebf990dc6c82a853babba48953579f60f2b52b8b WatchSource:0}: Error finding container c4ee07a0d8233ed2523adfefebf990dc6c82a853babba48953579f60f2b52b8b: Status 404 returned error can't find the container with id c4ee07a0d8233ed2523adfefebf990dc6c82a853babba48953579f60f2b52b8b Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.269698 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.272048 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.513470 4966 generic.go:334] "Generic (PLEG): container finished" podID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerID="4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255" exitCode=0 Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.513605 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxd8p" event={"ID":"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4","Type":"ContainerDied","Data":"4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255"} Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.513705 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxd8p" event={"ID":"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4","Type":"ContainerStarted","Data":"c4ee07a0d8233ed2523adfefebf990dc6c82a853babba48953579f60f2b52b8b"} Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.516328 4966 generic.go:334] "Generic (PLEG): container finished" podID="cf3024e0-9e0a-48aa-801b-84aa6e241811" containerID="0e9e81c456f4023ce50a2d3b9fdab6787e8e0738a9732e8c1457ecff380e7a12" exitCode=0 Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.516389 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"cf3024e0-9e0a-48aa-801b-84aa6e241811","Type":"ContainerDied","Data":"0e9e81c456f4023ce50a2d3b9fdab6787e8e0738a9732e8c1457ecff380e7a12"} Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.517606 4966 generic.go:334] "Generic (PLEG): container finished" podID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerID="d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4" exitCode=0 Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.518473 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrddt" event={"ID":"9e0696ef-3017-4937-92ee-fe9e794c9fdd","Type":"ContainerDied","Data":"d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4"} Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.518489 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrddt" event={"ID":"9e0696ef-3017-4937-92ee-fe9e794c9fdd","Type":"ContainerStarted","Data":"138cf139f1f4d72ea0f28b267091e4ce7d4ae249fdb1319d425dfdf3bfc93271"} Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.636983 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.640325 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.644245 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.647307 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.647754 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.829088 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.829142 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.930369 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.930410 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:29 crc 
kubenswrapper[4966]: I0127 15:44:29.930500 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.948383 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:29 crc kubenswrapper[4966]: I0127 15:44:29.970354 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:30 crc kubenswrapper[4966]: I0127 15:44:30.479074 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 15:44:30 crc kubenswrapper[4966]: W0127 15:44:30.504017 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0c39f89b_56f9_43f4_ac92_dc932ccbd49d.slice/crio-c042d3a0ddc8b18339e18ba73965bed23bec9499b0676c54af9487091f85d156 WatchSource:0}: Error finding container c042d3a0ddc8b18339e18ba73965bed23bec9499b0676c54af9487091f85d156: Status 404 returned error can't find the container with id c042d3a0ddc8b18339e18ba73965bed23bec9499b0676c54af9487091f85d156 Jan 27 15:44:30 crc kubenswrapper[4966]: I0127 15:44:30.550282 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0c39f89b-56f9-43f4-ac92-dc932ccbd49d","Type":"ContainerStarted","Data":"c042d3a0ddc8b18339e18ba73965bed23bec9499b0676c54af9487091f85d156"} Jan 27 15:44:30 crc kubenswrapper[4966]: I0127 15:44:30.902589 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.044384 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf3024e0-9e0a-48aa-801b-84aa6e241811-kubelet-dir\") pod \"cf3024e0-9e0a-48aa-801b-84aa6e241811\" (UID: \"cf3024e0-9e0a-48aa-801b-84aa6e241811\") " Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.044530 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf3024e0-9e0a-48aa-801b-84aa6e241811-kube-api-access\") pod \"cf3024e0-9e0a-48aa-801b-84aa6e241811\" (UID: \"cf3024e0-9e0a-48aa-801b-84aa6e241811\") " Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.044550 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf3024e0-9e0a-48aa-801b-84aa6e241811-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf3024e0-9e0a-48aa-801b-84aa6e241811" (UID: "cf3024e0-9e0a-48aa-801b-84aa6e241811"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.044834 4966 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf3024e0-9e0a-48aa-801b-84aa6e241811-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.052745 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3024e0-9e0a-48aa-801b-84aa6e241811-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf3024e0-9e0a-48aa-801b-84aa6e241811" (UID: "cf3024e0-9e0a-48aa-801b-84aa6e241811"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.146301 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf3024e0-9e0a-48aa-801b-84aa6e241811-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.570867 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0c39f89b-56f9-43f4-ac92-dc932ccbd49d","Type":"ContainerStarted","Data":"9b3e6b2c0daa317ebc9a9095f18f239d044d04dc1041f35eebadc76f620a6a2e"} Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.579740 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf3024e0-9e0a-48aa-801b-84aa6e241811","Type":"ContainerDied","Data":"b2a1c53ef615be14edeb31f0ea3bc5160607925e154c82ad3960227496b4c0a7"} Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.579777 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:44:31 crc kubenswrapper[4966]: I0127 15:44:31.579783 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2a1c53ef615be14edeb31f0ea3bc5160607925e154c82ad3960227496b4c0a7" Jan 27 15:44:32 crc kubenswrapper[4966]: I0127 15:44:32.596012 4966 generic.go:334] "Generic (PLEG): container finished" podID="0c39f89b-56f9-43f4-ac92-dc932ccbd49d" containerID="9b3e6b2c0daa317ebc9a9095f18f239d044d04dc1041f35eebadc76f620a6a2e" exitCode=0 Jan 27 15:44:32 crc kubenswrapper[4966]: I0127 15:44:32.596077 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0c39f89b-56f9-43f4-ac92-dc932ccbd49d","Type":"ContainerDied","Data":"9b3e6b2c0daa317ebc9a9095f18f239d044d04dc1041f35eebadc76f620a6a2e"} Jan 27 15:44:34 crc kubenswrapper[4966]: I0127 15:44:34.328127 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:44:37 crc kubenswrapper[4966]: I0127 15:44:37.734458 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:37 crc kubenswrapper[4966]: I0127 15:44:37.738441 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sgfdr" Jan 27 15:44:38 crc kubenswrapper[4966]: I0127 15:44:38.029839 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-kmmq4" Jan 27 15:44:38 crc kubenswrapper[4966]: I0127 15:44:38.830448 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:39 crc kubenswrapper[4966]: I0127 15:44:39.022976 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kube-api-access\") pod \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\" (UID: \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\") " Jan 27 15:44:39 crc kubenswrapper[4966]: I0127 15:44:39.023432 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kubelet-dir\") pod \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\" (UID: \"0c39f89b-56f9-43f4-ac92-dc932ccbd49d\") " Jan 27 15:44:39 crc kubenswrapper[4966]: I0127 15:44:39.023564 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0c39f89b-56f9-43f4-ac92-dc932ccbd49d" (UID: "0c39f89b-56f9-43f4-ac92-dc932ccbd49d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:44:39 crc kubenswrapper[4966]: I0127 15:44:39.024562 4966 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:39 crc kubenswrapper[4966]: I0127 15:44:39.031230 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0c39f89b-56f9-43f4-ac92-dc932ccbd49d" (UID: "0c39f89b-56f9-43f4-ac92-dc932ccbd49d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:44:39 crc kubenswrapper[4966]: I0127 15:44:39.125618 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c39f89b-56f9-43f4-ac92-dc932ccbd49d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:39 crc kubenswrapper[4966]: I0127 15:44:39.649380 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0c39f89b-56f9-43f4-ac92-dc932ccbd49d","Type":"ContainerDied","Data":"c042d3a0ddc8b18339e18ba73965bed23bec9499b0676c54af9487091f85d156"} Jan 27 15:44:39 crc kubenswrapper[4966]: I0127 15:44:39.649419 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c042d3a0ddc8b18339e18ba73965bed23bec9499b0676c54af9487091f85d156" Jan 27 15:44:39 crc kubenswrapper[4966]: I0127 15:44:39.649440 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:44:40 crc kubenswrapper[4966]: I0127 15:44:40.120074 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:44:40 crc kubenswrapper[4966]: I0127 15:44:40.120495 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.363159 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.363235 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.366108 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.370346 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.459963 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.463933 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.464002 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.470579 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.470640 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.635146 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:44:42 crc kubenswrapper[4966]: I0127 15:44:42.767932 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:44 crc kubenswrapper[4966]: I0127 15:44:44.111999 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" Jan 27 15:44:52 crc kubenswrapper[4966]: E0127 15:44:52.545360 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 15:44:52 crc kubenswrapper[4966]: E0127 15:44:52.545766 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qlnjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s6njw_openshift-marketplace(11e4932d-65cd-40d3-a441-604e8d96855c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:44:52 crc kubenswrapper[4966]: E0127 15:44:52.547319 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-s6njw" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" Jan 27 15:44:55 crc kubenswrapper[4966]: E0127 15:44:55.469250 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s6njw" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" Jan 27 15:44:55 crc kubenswrapper[4966]: E0127 15:44:55.591600 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 15:44:55 crc kubenswrapper[4966]: E0127 15:44:55.591960 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzws2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gtmd5_openshift-marketplace(6e7220b0-9ce3-461c-a434-e09d4fde1b0a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:44:55 crc kubenswrapper[4966]: E0127 15:44:55.594717 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gtmd5" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" Jan 27 15:44:56 crc kubenswrapper[4966]: E0127 15:44:56.471390 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gtmd5" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" Jan 27 15:44:56 crc kubenswrapper[4966]: E0127 15:44:56.541543 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 15:44:56 crc kubenswrapper[4966]: E0127 15:44:56.541712 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w84bx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wrghj_openshift-marketplace(bbad73d0-4e65-4601-9d3e-7ac464269b5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:44:56 crc kubenswrapper[4966]: E0127 15:44:56.543198 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wrghj" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" Jan 27 15:44:56 crc kubenswrapper[4966]: E0127 15:44:56.704318 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 15:44:56 crc kubenswrapper[4966]: E0127 15:44:56.704452 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktsn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9lpj5_openshift-marketplace(42e73b55-ac4e-4b3d-80be-2d9320fa42ce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:44:56 crc kubenswrapper[4966]: E0127 15:44:56.705764 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9lpj5" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" Jan 27 15:44:56 crc kubenswrapper[4966]: E0127 15:44:56.754561 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9lpj5" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" Jan 27 15:44:56 crc kubenswrapper[4966]: E0127 15:44:56.754643 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wrghj" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" Jan 27 15:44:57 crc kubenswrapper[4966]: W0127 15:44:57.130089 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-5abc2738ba50cec11146d9438b92660f8a0e61f77be00cece36fc2adc88674b5 WatchSource:0}: Error finding container 5abc2738ba50cec11146d9438b92660f8a0e61f77be00cece36fc2adc88674b5: Status 404 returned error can't find the container with id 5abc2738ba50cec11146d9438b92660f8a0e61f77be00cece36fc2adc88674b5 Jan 27 15:44:57 crc kubenswrapper[4966]: W0127 15:44:57.281987 4966 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-20838deff3ab4dda6c4ea98ed12af619200363021bb94ee98d93b3d5dba111b3 WatchSource:0}: Error finding container 20838deff3ab4dda6c4ea98ed12af619200363021bb94ee98d93b3d5dba111b3: Status 404 returned error can't find the container with id 20838deff3ab4dda6c4ea98ed12af619200363021bb94ee98d93b3d5dba111b3 Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.759979 4966 generic.go:334] "Generic (PLEG): container finished" podID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerID="e05e3618a09bcafe68421c66e24c5b8a2ae69e2e4566dcc64ad622a4a1e6577f" exitCode=0 Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.760867 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnhcl" event={"ID":"c3ad1e5f-77aa-4005-bd12-618819d83c12","Type":"ContainerDied","Data":"e05e3618a09bcafe68421c66e24c5b8a2ae69e2e4566dcc64ad622a4a1e6577f"} Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.765219 4966 generic.go:334] "Generic (PLEG): container finished" podID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerID="cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234" exitCode=0 Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.765270 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrddt" event={"ID":"9e0696ef-3017-4937-92ee-fe9e794c9fdd","Type":"ContainerDied","Data":"cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234"} Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.766549 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1935e20bc2d9ce067038cd24d98c8731bd196414ebb185cdf468cee81dea18c8"} Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.766569 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5abc2738ba50cec11146d9438b92660f8a0e61f77be00cece36fc2adc88674b5"} Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.771239 4966 generic.go:334] "Generic (PLEG): container finished" podID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerID="b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461" exitCode=0 Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.771344 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxd8p" event={"ID":"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4","Type":"ContainerDied","Data":"b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461"} Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.774814 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ae42f9c6bdd86e353f787a8643076c4c977896389757beb67ef7563d72fe1da0"} Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.774854 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"20838deff3ab4dda6c4ea98ed12af619200363021bb94ee98d93b3d5dba111b3"} Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.775630 
4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.777519 4966 generic.go:334] "Generic (PLEG): container finished" podID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerID="7dfe673ae1f9fe7f8dedca1bde7cab7075f2be05df92d3d244373e23c4dbbb32" exitCode=0 Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.777571 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqggh" event={"ID":"5171e70d-cb12-4969-b12f-535c813ce6b9","Type":"ContainerDied","Data":"7dfe673ae1f9fe7f8dedca1bde7cab7075f2be05df92d3d244373e23c4dbbb32"} Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.781241 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9787dbd6844e00abac25ba1a304ed6c0d80747ed06caf909620d053119954e21"} Jan 27 15:44:57 crc kubenswrapper[4966]: I0127 15:44:57.781283 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f95fa19e98b2dcd0a116714ab59371f9383a3587e6b64465d491b75903f38975"} Jan 27 15:44:58 crc kubenswrapper[4966]: I0127 15:44:58.217825 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 15:44:58 crc kubenswrapper[4966]: I0127 15:44:58.789917 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnhcl" event={"ID":"c3ad1e5f-77aa-4005-bd12-618819d83c12","Type":"ContainerStarted","Data":"74c361ba077011ebf2e5fe3ff200db6938dabefc1bfb5559a1430bf50c8723d6"} Jan 27 15:44:58 crc kubenswrapper[4966]: I0127 15:44:58.792730 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrddt" event={"ID":"9e0696ef-3017-4937-92ee-fe9e794c9fdd","Type":"ContainerStarted","Data":"dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f"} Jan 27 15:44:58 crc kubenswrapper[4966]: I0127 15:44:58.795150 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxd8p" event={"ID":"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4","Type":"ContainerStarted","Data":"741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125"} Jan 27 15:44:58 crc kubenswrapper[4966]: I0127 15:44:58.807154 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqggh" event={"ID":"5171e70d-cb12-4969-b12f-535c813ce6b9","Type":"ContainerStarted","Data":"6a40331b65ac6086acb212b5a08152bfe07b3082755bfd00bb447f085686ba0d"} Jan 27 15:44:58 crc kubenswrapper[4966]: I0127 15:44:58.817404 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dnhcl" podStartSLOduration=2.729973766 podStartE2EDuration="34.817389272s" podCreationTimestamp="2026-01-27 15:44:24 +0000 UTC" firstStartedPulling="2026-01-27 15:44:26.443073186 +0000 UTC m=+132.745866674" lastFinishedPulling="2026-01-27 15:44:58.530488702 +0000 UTC m=+164.833282180" observedRunningTime="2026-01-27 15:44:58.816736806 +0000 UTC m=+165.119530324" watchObservedRunningTime="2026-01-27 15:44:58.817389272 +0000 UTC m=+165.120182760" Jan 27 15:44:58 
crc kubenswrapper[4966]: I0127 15:44:58.863880 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nxd8p" podStartSLOduration=1.753963283 podStartE2EDuration="30.863862833s" podCreationTimestamp="2026-01-27 15:44:28 +0000 UTC" firstStartedPulling="2026-01-27 15:44:29.542546163 +0000 UTC m=+135.845339641" lastFinishedPulling="2026-01-27 15:44:58.652445703 +0000 UTC m=+164.955239191" observedRunningTime="2026-01-27 15:44:58.844992117 +0000 UTC m=+165.147785625" watchObservedRunningTime="2026-01-27 15:44:58.863862833 +0000 UTC m=+165.166656321" Jan 27 15:44:58 crc kubenswrapper[4966]: I0127 15:44:58.864086 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jrddt" podStartSLOduration=2.905404555 podStartE2EDuration="31.864081508s" podCreationTimestamp="2026-01-27 15:44:27 +0000 UTC" firstStartedPulling="2026-01-27 15:44:29.542253046 +0000 UTC m=+135.845046534" lastFinishedPulling="2026-01-27 15:44:58.500929999 +0000 UTC m=+164.803723487" observedRunningTime="2026-01-27 15:44:58.861859304 +0000 UTC m=+165.164652792" watchObservedRunningTime="2026-01-27 15:44:58.864081508 +0000 UTC m=+165.166874996" Jan 27 15:44:58 crc kubenswrapper[4966]: I0127 15:44:58.878286 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rqggh" podStartSLOduration=2.849805468 podStartE2EDuration="34.87826842s" podCreationTimestamp="2026-01-27 15:44:24 +0000 UTC" firstStartedPulling="2026-01-27 15:44:26.427539123 +0000 UTC m=+132.730332611" lastFinishedPulling="2026-01-27 15:44:58.456002055 +0000 UTC m=+164.758795563" observedRunningTime="2026-01-27 15:44:58.87538186 +0000 UTC m=+165.178175358" watchObservedRunningTime="2026-01-27 15:44:58.87826842 +0000 UTC m=+165.181061908" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.138138 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"] Jan 27 15:45:00 crc kubenswrapper[4966]: E0127 15:45:00.139247 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3024e0-9e0a-48aa-801b-84aa6e241811" containerName="pruner" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.139337 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3024e0-9e0a-48aa-801b-84aa6e241811" containerName="pruner" Jan 27 15:45:00 crc kubenswrapper[4966]: E0127 15:45:00.139421 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c39f89b-56f9-43f4-ac92-dc932ccbd49d" containerName="pruner" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.139488 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c39f89b-56f9-43f4-ac92-dc932ccbd49d" containerName="pruner" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.139678 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c39f89b-56f9-43f4-ac92-dc932ccbd49d" containerName="pruner" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.139757 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf3024e0-9e0a-48aa-801b-84aa6e241811" containerName="pruner" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.140345 4966 util.go:30] "No sandbox for pod can be found. 
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.140345 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.142680 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.143122 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.148731 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"]
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.228492 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkdcs\" (UniqueName: \"kubernetes.io/projected/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-kube-api-access-lkdcs\") pod \"collect-profiles-29492145-nwrxn\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.228545 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-config-volume\") pod \"collect-profiles-29492145-nwrxn\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.228578 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-secret-volume\") pod \"collect-profiles-29492145-nwrxn\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.330101 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkdcs\" (UniqueName: \"kubernetes.io/projected/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-kube-api-access-lkdcs\") pod \"collect-profiles-29492145-nwrxn\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.330147 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-config-volume\") pod \"collect-profiles-29492145-nwrxn\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"
Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.330172 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-secret-volume\") pod \"collect-profiles-29492145-nwrxn\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"
\"collect-profiles-29492145-nwrxn\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.340777 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-secret-volume\") pod \"collect-profiles-29492145-nwrxn\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.355739 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkdcs\" (UniqueName: \"kubernetes.io/projected/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-kube-api-access-lkdcs\") pod \"collect-profiles-29492145-nwrxn\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.455992 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn" Jan 27 15:45:00 crc kubenswrapper[4966]: I0127 15:45:00.875588 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"] Jan 27 15:45:01 crc kubenswrapper[4966]: I0127 15:45:01.241254 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:45:01 crc kubenswrapper[4966]: I0127 15:45:01.247226 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/311852f1-9764-49e5-a58a-5c2feee4ed1f-metrics-certs\") pod \"network-metrics-daemon-2fsdv\" (UID: \"311852f1-9764-49e5-a58a-5c2feee4ed1f\") " pod="openshift-multus/network-metrics-daemon-2fsdv" Jan 27 15:45:01 crc kubenswrapper[4966]: I0127 15:45:01.350813 4966 util.go:30] "No sandbox for pod can be found. 
Jan 27 15:45:01 crc kubenswrapper[4966]: I0127 15:45:01.350813 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fsdv"
Jan 27 15:45:01 crc kubenswrapper[4966]: I0127 15:45:01.737098 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2fsdv"]
Jan 27 15:45:01 crc kubenswrapper[4966]: W0127 15:45:01.744479 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod311852f1_9764_49e5_a58a_5c2feee4ed1f.slice/crio-7265c5a016cc5a98825404be23bf8e7d04e7446bd72da7ed113a3cc86fd0ddf7 WatchSource:0}: Error finding container 7265c5a016cc5a98825404be23bf8e7d04e7446bd72da7ed113a3cc86fd0ddf7: Status 404 returned error can't find the container with id 7265c5a016cc5a98825404be23bf8e7d04e7446bd72da7ed113a3cc86fd0ddf7
Jan 27 15:45:01 crc kubenswrapper[4966]: I0127 15:45:01.822450 4966 generic.go:334] "Generic (PLEG): container finished" podID="e60a61a8-a15f-4e91-b12e-f77c8b9c7397" containerID="d654bc5db9e9b1fa0be2d16fcab340e28e9c8c7a8cf1472d700975447199ea3f" exitCode=0
Jan 27 15:45:01 crc kubenswrapper[4966]: I0127 15:45:01.822582 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn" event={"ID":"e60a61a8-a15f-4e91-b12e-f77c8b9c7397","Type":"ContainerDied","Data":"d654bc5db9e9b1fa0be2d16fcab340e28e9c8c7a8cf1472d700975447199ea3f"}
Jan 27 15:45:01 crc kubenswrapper[4966]: I0127 15:45:01.822626 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn" event={"ID":"e60a61a8-a15f-4e91-b12e-f77c8b9c7397","Type":"ContainerStarted","Data":"939160ac9e642b7705456c97daab532db2b569cd411c5a50d2696f2ac1103bde"}
Jan 27 15:45:01 crc kubenswrapper[4966]: I0127 15:45:01.827331 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" event={"ID":"311852f1-9764-49e5-a58a-5c2feee4ed1f","Type":"ContainerStarted","Data":"7265c5a016cc5a98825404be23bf8e7d04e7446bd72da7ed113a3cc86fd0ddf7"}
Jan 27 15:45:02 crc kubenswrapper[4966]: I0127 15:45:02.832435 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" event={"ID":"311852f1-9764-49e5-a58a-5c2feee4ed1f","Type":"ContainerStarted","Data":"8fcde421e02281fce1a0a677bf391d13eac695de4c92e414aa7547a5e15857a4"}
Jan 27 15:45:02 crc kubenswrapper[4966]: I0127 15:45:02.832793 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2fsdv" event={"ID":"311852f1-9764-49e5-a58a-5c2feee4ed1f","Type":"ContainerStarted","Data":"5f897d1248d2b07b2db96adfc94173e06cd1d79eb5ac7b51b823e14639541549"}
Jan 27 15:45:02 crc kubenswrapper[4966]: I0127 15:45:02.851764 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2fsdv" podStartSLOduration=144.85173511 podStartE2EDuration="2m24.85173511s" podCreationTimestamp="2026-01-27 15:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:45:02.849008184 +0000 UTC m=+169.151801692" watchObservedRunningTime="2026-01-27 15:45:02.85173511 +0000 UTC m=+169.154528618"
Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.134435 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"
Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.265060 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-secret-volume\") pod \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") "
Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.265153 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkdcs\" (UniqueName: \"kubernetes.io/projected/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-kube-api-access-lkdcs\") pod \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") "
Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.265178 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-config-volume\") pod \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\" (UID: \"e60a61a8-a15f-4e91-b12e-f77c8b9c7397\") "
Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.266097 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-config-volume" (OuterVolumeSpecName: "config-volume") pod "e60a61a8-a15f-4e91-b12e-f77c8b9c7397" (UID: "e60a61a8-a15f-4e91-b12e-f77c8b9c7397"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.271348 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-kube-api-access-lkdcs" (OuterVolumeSpecName: "kube-api-access-lkdcs") pod "e60a61a8-a15f-4e91-b12e-f77c8b9c7397" (UID: "e60a61a8-a15f-4e91-b12e-f77c8b9c7397"). InnerVolumeSpecName "kube-api-access-lkdcs". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.366268 4966 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.366319 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkdcs\" (UniqueName: \"kubernetes.io/projected/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-kube-api-access-lkdcs\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.366328 4966 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e60a61a8-a15f-4e91-b12e-f77c8b9c7397-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.839535 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn" event={"ID":"e60a61a8-a15f-4e91-b12e-f77c8b9c7397","Type":"ContainerDied","Data":"939160ac9e642b7705456c97daab532db2b569cd411c5a50d2696f2ac1103bde"} Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.839575 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939160ac9e642b7705456c97daab532db2b569cd411c5a50d2696f2ac1103bde" Jan 27 15:45:03 crc kubenswrapper[4966]: I0127 15:45:03.839598 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn" Jan 27 15:45:04 crc kubenswrapper[4966]: I0127 15:45:04.949577 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:45:04 crc kubenswrapper[4966]: I0127 15:45:04.949834 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:45:05 crc kubenswrapper[4966]: I0127 15:45:05.092435 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:45:05 crc kubenswrapper[4966]: I0127 15:45:05.351952 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:45:05 crc kubenswrapper[4966]: I0127 15:45:05.352011 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:45:05 crc kubenswrapper[4966]: I0127 15:45:05.397007 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:45:05 crc kubenswrapper[4966]: I0127 15:45:05.905114 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:45:05 crc kubenswrapper[4966]: I0127 15:45:05.905767 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:45:06 crc kubenswrapper[4966]: I0127 15:45:06.325123 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqggh"] Jan 27 15:45:07 crc kubenswrapper[4966]: I0127 15:45:07.862930 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rqggh" 
podUID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerName="registry-server" containerID="cri-o://6a40331b65ac6086acb212b5a08152bfe07b3082755bfd00bb447f085686ba0d" gracePeriod=2 Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.165379 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.165443 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.206107 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.576488 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.576542 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.670111 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.838916 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 15:45:08 crc kubenswrapper[4966]: E0127 15:45:08.839127 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60a61a8-a15f-4e91-b12e-f77c8b9c7397" containerName="collect-profiles" Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.839139 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60a61a8-a15f-4e91-b12e-f77c8b9c7397" containerName="collect-profiles" Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.839267 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60a61a8-a15f-4e91-b12e-f77c8b9c7397" containerName="collect-profiles" Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.839627 4966 util.go:30] "No sandbox for pod can be found. 
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.839627 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.841461 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.844304 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.845357 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.871140 4966 generic.go:334] "Generic (PLEG): container finished" podID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerID="6a40331b65ac6086acb212b5a08152bfe07b3082755bfd00bb447f085686ba0d" exitCode=0
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.872067 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqggh" event={"ID":"5171e70d-cb12-4969-b12f-535c813ce6b9","Type":"ContainerDied","Data":"6a40331b65ac6086acb212b5a08152bfe07b3082755bfd00bb447f085686ba0d"}
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.912510 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nxd8p"
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.916174 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jrddt"
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.937578 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54d12582-0fd3-41b5-aff6-d3540016c72e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"54d12582-0fd3-41b5-aff6-d3540016c72e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:08 crc kubenswrapper[4966]: I0127 15:45:08.937665 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54d12582-0fd3-41b5-aff6-d3540016c72e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"54d12582-0fd3-41b5-aff6-d3540016c72e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:09 crc kubenswrapper[4966]: I0127 15:45:09.039136 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54d12582-0fd3-41b5-aff6-d3540016c72e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"54d12582-0fd3-41b5-aff6-d3540016c72e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:09 crc kubenswrapper[4966]: I0127 15:45:09.039226 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54d12582-0fd3-41b5-aff6-d3540016c72e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"54d12582-0fd3-41b5-aff6-d3540016c72e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:09 crc kubenswrapper[4966]: I0127 15:45:09.039325 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54d12582-0fd3-41b5-aff6-d3540016c72e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"54d12582-0fd3-41b5-aff6-d3540016c72e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:09 crc kubenswrapper[4966]: I0127 15:45:09.060198 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54d12582-0fd3-41b5-aff6-d3540016c72e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"54d12582-0fd3-41b5-aff6-d3540016c72e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:09 crc kubenswrapper[4966]: I0127 15:45:09.164300 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:10 crc kubenswrapper[4966]: I0127 15:45:10.119572 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:45:10 crc kubenswrapper[4966]: I0127 15:45:10.120031 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:45:11 crc kubenswrapper[4966]: I0127 15:45:11.919324 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxd8p"]
Jan 27 15:45:11 crc kubenswrapper[4966]: I0127 15:45:11.919526 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nxd8p" podUID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerName="registry-server" containerID="cri-o://741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125" gracePeriod=2
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.503123 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqggh"
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.598803 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-utilities\") pod \"5171e70d-cb12-4969-b12f-535c813ce6b9\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") "
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.599211 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-catalog-content\") pod \"5171e70d-cb12-4969-b12f-535c813ce6b9\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") "
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.599233 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd6pk\" (UniqueName: \"kubernetes.io/projected/5171e70d-cb12-4969-b12f-535c813ce6b9-kube-api-access-hd6pk\") pod \"5171e70d-cb12-4969-b12f-535c813ce6b9\" (UID: \"5171e70d-cb12-4969-b12f-535c813ce6b9\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.603122 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.606530 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5171e70d-cb12-4969-b12f-535c813ce6b9-kube-api-access-hd6pk" (OuterVolumeSpecName: "kube-api-access-hd6pk") pod "5171e70d-cb12-4969-b12f-535c813ce6b9" (UID: "5171e70d-cb12-4969-b12f-535c813ce6b9"). InnerVolumeSpecName "kube-api-access-hd6pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.648329 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 15:45:13 crc kubenswrapper[4966]: E0127 15:45:13.648592 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerName="extract-utilities" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.648606 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerName="extract-utilities" Jan 27 15:45:13 crc kubenswrapper[4966]: E0127 15:45:13.648616 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerName="extract-content" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.648621 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerName="extract-content" Jan 27 15:45:13 crc kubenswrapper[4966]: E0127 15:45:13.648638 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerName="registry-server" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.648644 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerName="registry-server" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.648740 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5171e70d-cb12-4969-b12f-535c813ce6b9" containerName="registry-server" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.649214 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.653774 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxd8p" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.658501 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.659411 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5171e70d-cb12-4969-b12f-535c813ce6b9" (UID: "5171e70d-cb12-4969-b12f-535c813ce6b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.704086 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2krw8\" (UniqueName: \"kubernetes.io/projected/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-kube-api-access-2krw8\") pod \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.704186 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-utilities\") pod \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.704255 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-catalog-content\") pod \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\" (UID: \"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4\") " Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.704441 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kubelet-dir\") pod \"installer-9-crc\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.704477 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kube-api-access\") pod \"installer-9-crc\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.704504 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-var-lock\") pod \"installer-9-crc\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.704601 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5171e70d-cb12-4969-b12f-535c813ce6b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.704614 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd6pk\" (UniqueName: \"kubernetes.io/projected/5171e70d-cb12-4969-b12f-535c813ce6b9-kube-api-access-hd6pk\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.705997 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-utilities" (OuterVolumeSpecName: "utilities") pod "3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" (UID: "3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.708772 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-kube-api-access-2krw8" (OuterVolumeSpecName: "kube-api-access-2krw8") pod "3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" (UID: "3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4"). InnerVolumeSpecName "kube-api-access-2krw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.806105 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kubelet-dir\") pod \"installer-9-crc\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.806142 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kube-api-access\") pod \"installer-9-crc\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.806181 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-var-lock\") pod \"installer-9-crc\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.806240 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.806240 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kubelet-dir\") pod \"installer-9-crc\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.806291 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-var-lock\") pod \"installer-9-crc\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.806261 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2krw8\" (UniqueName: \"kubernetes.io/projected/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-kube-api-access-2krw8\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.821488 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kube-api-access\") pod \"installer-9-crc\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.824504 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" (UID: "3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.887347 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 15:45:13 crc kubenswrapper[4966]: W0127 15:45:13.896279 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod54d12582_0fd3_41b5_aff6_d3540016c72e.slice/crio-c7c10f3808fbc477baa630e1a92ca86c943584913f2d613fedc7bbf7d2eb2499 WatchSource:0}: Error finding container c7c10f3808fbc477baa630e1a92ca86c943584913f2d613fedc7bbf7d2eb2499: Status 404 returned error can't find the container with id c7c10f3808fbc477baa630e1a92ca86c943584913f2d613fedc7bbf7d2eb2499 Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.902494 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqggh" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.902569 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqggh" event={"ID":"5171e70d-cb12-4969-b12f-535c813ce6b9","Type":"ContainerDied","Data":"93d0a5f475b2f0df3978bfa25500eaafbd0cfe0419d1a1bade69fc2fc5ec55df"} Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.902642 4966 scope.go:117] "RemoveContainer" containerID="6a40331b65ac6086acb212b5a08152bfe07b3082755bfd00bb447f085686ba0d" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.907227 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.909110 4966 generic.go:334] "Generic (PLEG): container finished" podID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerID="741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125" exitCode=0 Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.909166 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxd8p" event={"ID":"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4","Type":"ContainerDied","Data":"741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125"} Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.909205 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxd8p" event={"ID":"3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4","Type":"ContainerDied","Data":"c4ee07a0d8233ed2523adfefebf990dc6c82a853babba48953579f60f2b52b8b"} Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.909293 4966 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.909293 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxd8p"
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.927690 4966 scope.go:117] "RemoveContainer" containerID="7dfe673ae1f9fe7f8dedca1bde7cab7075f2be05df92d3d244373e23c4dbbb32"
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.938698 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqggh"]
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.942079 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rqggh"]
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.966275 4966 scope.go:117] "RemoveContainer" containerID="c447b003d2afc5266dfcf5a227e7a410ad0a33c7258bbc7bdf80816941c018fa"
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.968354 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxd8p"]
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.974269 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nxd8p"]
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.986006 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:45:13 crc kubenswrapper[4966]: I0127 15:45:13.991804 4966 scope.go:117] "RemoveContainer" containerID="741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.011871 4966 scope.go:117] "RemoveContainer" containerID="b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.032575 4966 scope.go:117] "RemoveContainer" containerID="4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.056122 4966 scope.go:117] "RemoveContainer" containerID="741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125"
Jan 27 15:45:14 crc kubenswrapper[4966]: E0127 15:45:14.056953 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125\": container with ID starting with 741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125 not found: ID does not exist" containerID="741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.056996 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125"} err="failed to get container status \"741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125\": rpc error: code = NotFound desc = could not find container \"741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125\": container with ID starting with 741cf1ac4481b01f2b0546dc3ed391c36ff90c35c9fcab005a58902205b92125 not found: ID does not exist"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.057045 4966 scope.go:117] "RemoveContainer" containerID="b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461"
Jan 27 15:45:14 crc kubenswrapper[4966]: E0127 15:45:14.057358 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461\": container with ID starting with b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461 not found: ID does not exist" containerID="b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.057392 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461"} err="failed to get container status \"b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461\": rpc error: code = NotFound desc = could not find container \"b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461\": container with ID starting with b629c2b867a7c58903aad84e34aefb291937e2568e0cf66cf8b3ce0ea1e6a461 not found: ID does not exist"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.057413 4966 scope.go:117] "RemoveContainer" containerID="4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255"
Jan 27 15:45:14 crc kubenswrapper[4966]: E0127 15:45:14.057690 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255\": container with ID starting with 4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255 not found: ID does not exist" containerID="4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.057713 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255"} err="failed to get container status \"4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255\": rpc error: code = NotFound desc = could not find container \"4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255\": container with ID starting with 4175a9fc860c106940fd8d3f098954c4f3a60e93808d43dfe577c6ac65dc9255 not found: ID does not exist"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.448720 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.530305 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" path="/var/lib/kubelet/pods/3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4/volumes"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.531333 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5171e70d-cb12-4969-b12f-535c813ce6b9" path="/var/lib/kubelet/pods/5171e70d-cb12-4969-b12f-535c813ce6b9/volumes"
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.916717 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"82f730e3-90c9-49c0-8bd2-b993a0f9bc66","Type":"ContainerStarted","Data":"370db505401f1831a1db80b505c53c00f6c992a4daf147d6bd229c8762a97b1b"}
Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.918369 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"54d12582-0fd3-41b5-aff6-d3540016c72e","Type":"ContainerStarted","Data":"01c4b2fb18f91ad86ac95b13012abf32a02379b3115c62e00323f5aabae2ffa7"}
event={"ID":"54d12582-0fd3-41b5-aff6-d3540016c72e","Type":"ContainerStarted","Data":"c7c10f3808fbc477baa630e1a92ca86c943584913f2d613fedc7bbf7d2eb2499"} Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.920307 4966 generic.go:334] "Generic (PLEG): container finished" podID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerID="e6b8690c01d87a3fbbdb6b803e8a1f246e1d49642257fb9eec57d2ea26e00b66" exitCode=0 Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.920393 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrghj" event={"ID":"bbad73d0-4e65-4601-9d3e-7ac464269b5f","Type":"ContainerDied","Data":"e6b8690c01d87a3fbbdb6b803e8a1f246e1d49642257fb9eec57d2ea26e00b66"} Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.922923 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6njw" event={"ID":"11e4932d-65cd-40d3-a441-604e8d96855c","Type":"ContainerDied","Data":"ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183"} Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.922880 4966 generic.go:334] "Generic (PLEG): container finished" podID="11e4932d-65cd-40d3-a441-604e8d96855c" containerID="ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183" exitCode=0 Jan 27 15:45:14 crc kubenswrapper[4966]: I0127 15:45:14.939206 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=6.939178437 podStartE2EDuration="6.939178437s" podCreationTimestamp="2026-01-27 15:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:45:14.930083489 +0000 UTC m=+181.232877027" watchObservedRunningTime="2026-01-27 15:45:14.939178437 +0000 UTC m=+181.241971955" Jan 27 15:45:15 crc kubenswrapper[4966]: I0127 15:45:15.930974 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"82f730e3-90c9-49c0-8bd2-b993a0f9bc66","Type":"ContainerStarted","Data":"e94d16d6c5997703344b8c768b8ff64a02a67a7d3dbd143f60c62f6e323ca9f8"} Jan 27 15:45:15 crc kubenswrapper[4966]: I0127 15:45:15.933566 4966 generic.go:334] "Generic (PLEG): container finished" podID="54d12582-0fd3-41b5-aff6-d3540016c72e" containerID="01c4b2fb18f91ad86ac95b13012abf32a02379b3115c62e00323f5aabae2ffa7" exitCode=0 Jan 27 15:45:15 crc kubenswrapper[4966]: I0127 15:45:15.933600 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"54d12582-0fd3-41b5-aff6-d3540016c72e","Type":"ContainerDied","Data":"01c4b2fb18f91ad86ac95b13012abf32a02379b3115c62e00323f5aabae2ffa7"} Jan 27 15:45:15 crc kubenswrapper[4966]: I0127 15:45:15.946843 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.946826923 podStartE2EDuration="2.946826923s" podCreationTimestamp="2026-01-27 15:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:45:15.944031875 +0000 UTC m=+182.246825383" watchObservedRunningTime="2026-01-27 15:45:15.946826923 +0000 UTC m=+182.249620411" Jan 27 15:45:16 crc kubenswrapper[4966]: I0127 15:45:16.939782 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrghj" 
event={"ID":"bbad73d0-4e65-4601-9d3e-7ac464269b5f","Type":"ContainerStarted","Data":"8933519b7f897a90af067c5a6d8dfcf872ff9e7902a57991d1977c60c2b0f614"} Jan 27 15:45:16 crc kubenswrapper[4966]: I0127 15:45:16.941786 4966 generic.go:334] "Generic (PLEG): container finished" podID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerID="c195c696c43b3faaa2c0779e59f668ed2c95b9e7647666be08449af7724ab5e2" exitCode=0 Jan 27 15:45:16 crc kubenswrapper[4966]: I0127 15:45:16.941889 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmd5" event={"ID":"6e7220b0-9ce3-461c-a434-e09d4fde1b0a","Type":"ContainerDied","Data":"c195c696c43b3faaa2c0779e59f668ed2c95b9e7647666be08449af7724ab5e2"} Jan 27 15:45:16 crc kubenswrapper[4966]: I0127 15:45:16.943980 4966 generic.go:334] "Generic (PLEG): container finished" podID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerID="3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75" exitCode=0 Jan 27 15:45:16 crc kubenswrapper[4966]: I0127 15:45:16.944045 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lpj5" event={"ID":"42e73b55-ac4e-4b3d-80be-2d9320fa42ce","Type":"ContainerDied","Data":"3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75"} Jan 27 15:45:16 crc kubenswrapper[4966]: I0127 15:45:16.947436 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6njw" event={"ID":"11e4932d-65cd-40d3-a441-604e8d96855c","Type":"ContainerStarted","Data":"917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896"} Jan 27 15:45:16 crc kubenswrapper[4966]: I0127 15:45:16.967605 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wrghj" podStartSLOduration=3.079783372 podStartE2EDuration="50.967589203s" podCreationTimestamp="2026-01-27 15:44:26 +0000 UTC" firstStartedPulling="2026-01-27 15:44:28.476822518 +0000 UTC m=+134.779616006" lastFinishedPulling="2026-01-27 15:45:16.364628349 +0000 UTC m=+182.667421837" observedRunningTime="2026-01-27 15:45:16.965062622 +0000 UTC m=+183.267856110" watchObservedRunningTime="2026-01-27 15:45:16.967589203 +0000 UTC m=+183.270382691" Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.002279 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s6njw" podStartSLOduration=2.086275906 podStartE2EDuration="52.002229369s" podCreationTimestamp="2026-01-27 15:44:25 +0000 UTC" firstStartedPulling="2026-01-27 15:44:26.449591949 +0000 UTC m=+132.752385447" lastFinishedPulling="2026-01-27 15:45:16.365545382 +0000 UTC m=+182.668338910" observedRunningTime="2026-01-27 15:45:17.000865605 +0000 UTC m=+183.303659104" watchObservedRunningTime="2026-01-27 15:45:17.002229369 +0000 UTC m=+183.305022867" Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.164282 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.164532 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.256677 4966 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.256677 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.350243 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54d12582-0fd3-41b5-aff6-d3540016c72e-kube-api-access\") pod \"54d12582-0fd3-41b5-aff6-d3540016c72e\" (UID: \"54d12582-0fd3-41b5-aff6-d3540016c72e\") "
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.350645 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54d12582-0fd3-41b5-aff6-d3540016c72e-kubelet-dir\") pod \"54d12582-0fd3-41b5-aff6-d3540016c72e\" (UID: \"54d12582-0fd3-41b5-aff6-d3540016c72e\") "
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.350738 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54d12582-0fd3-41b5-aff6-d3540016c72e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "54d12582-0fd3-41b5-aff6-d3540016c72e" (UID: "54d12582-0fd3-41b5-aff6-d3540016c72e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.350940 4966 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54d12582-0fd3-41b5-aff6-d3540016c72e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.355639 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d12582-0fd3-41b5-aff6-d3540016c72e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "54d12582-0fd3-41b5-aff6-d3540016c72e" (UID: "54d12582-0fd3-41b5-aff6-d3540016c72e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.451974 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54d12582-0fd3-41b5-aff6-d3540016c72e-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.952069 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"54d12582-0fd3-41b5-aff6-d3540016c72e","Type":"ContainerDied","Data":"c7c10f3808fbc477baa630e1a92ca86c943584913f2d613fedc7bbf7d2eb2499"}
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.952110 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7c10f3808fbc477baa630e1a92ca86c943584913f2d613fedc7bbf7d2eb2499"
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.952081 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.953775 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmd5" event={"ID":"6e7220b0-9ce3-461c-a434-e09d4fde1b0a","Type":"ContainerStarted","Data":"14f0e287f7dfcf5ac456350e45b80e862eadab33dd60fabcbe59a370af5130a3"}
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.956462 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lpj5" event={"ID":"42e73b55-ac4e-4b3d-80be-2d9320fa42ce","Type":"ContainerStarted","Data":"6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2"}
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.972857 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtmd5" podStartSLOduration=2.9930066 podStartE2EDuration="53.97284298s" podCreationTimestamp="2026-01-27 15:44:24 +0000 UTC" firstStartedPulling="2026-01-27 15:44:26.447313268 +0000 UTC m=+132.750106756" lastFinishedPulling="2026-01-27 15:45:17.427149648 +0000 UTC m=+183.729943136" observedRunningTime="2026-01-27 15:45:17.969248813 +0000 UTC m=+184.272042301" watchObservedRunningTime="2026-01-27 15:45:17.97284298 +0000 UTC m=+184.275636468"
Jan 27 15:45:17 crc kubenswrapper[4966]: I0127 15:45:17.990332 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9lpj5" podStartSLOduration=2.069220786 podStartE2EDuration="50.990314491s" podCreationTimestamp="2026-01-27 15:44:27 +0000 UTC" firstStartedPulling="2026-01-27 15:44:28.492044625 +0000 UTC m=+134.794838113" lastFinishedPulling="2026-01-27 15:45:17.41313833 +0000 UTC m=+183.715931818" observedRunningTime="2026-01-27 15:45:17.989552423 +0000 UTC m=+184.292345911" watchObservedRunningTime="2026-01-27 15:45:17.990314491 +0000 UTC m=+184.293107979"
Jan 27 15:45:18 crc kubenswrapper[4966]: I0127 15:45:18.210511 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wrghj" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerName="registry-server" probeResult="failure" output=<
Jan 27 15:45:18 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s
Jan 27 15:45:18 crc kubenswrapper[4966]: >
Jan 27 15:45:25 crc kubenswrapper[4966]: I0127 15:45:25.144705 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtmd5"
Jan 27 15:45:25 crc kubenswrapper[4966]: I0127 15:45:25.145295 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtmd5"
Jan 27 15:45:25 crc kubenswrapper[4966]: I0127 15:45:25.237314 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtmd5"
Jan 27 15:45:25 crc kubenswrapper[4966]: I0127 15:45:25.537024 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s6njw"
Jan 27 15:45:25 crc kubenswrapper[4966]: I0127 15:45:25.537082 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6njw"
Jan 27 15:45:25 crc kubenswrapper[4966]: I0127 15:45:25.590590 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6njw"
15:45:26 crc kubenswrapper[4966]: I0127 15:45:26.062022 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:45:26 crc kubenswrapper[4966]: I0127 15:45:26.069039 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:45:27 crc kubenswrapper[4966]: I0127 15:45:27.115298 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6njw"] Jan 27 15:45:27 crc kubenswrapper[4966]: I0127 15:45:27.203083 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:45:27 crc kubenswrapper[4966]: I0127 15:45:27.250247 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:45:27 crc kubenswrapper[4966]: I0127 15:45:27.551633 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:45:27 crc kubenswrapper[4966]: I0127 15:45:27.551698 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:45:27 crc kubenswrapper[4966]: I0127 15:45:27.612054 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.017215 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s6njw" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" containerName="registry-server" containerID="cri-o://917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896" gracePeriod=2 Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.061849 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.472494 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.609019 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlnjb\" (UniqueName: \"kubernetes.io/projected/11e4932d-65cd-40d3-a441-604e8d96855c-kube-api-access-qlnjb\") pod \"11e4932d-65cd-40d3-a441-604e8d96855c\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.609185 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-utilities\") pod \"11e4932d-65cd-40d3-a441-604e8d96855c\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.609249 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-catalog-content\") pod \"11e4932d-65cd-40d3-a441-604e8d96855c\" (UID: \"11e4932d-65cd-40d3-a441-604e8d96855c\") " Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.611571 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-utilities" (OuterVolumeSpecName: "utilities") pod "11e4932d-65cd-40d3-a441-604e8d96855c" (UID: "11e4932d-65cd-40d3-a441-604e8d96855c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.617847 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e4932d-65cd-40d3-a441-604e8d96855c-kube-api-access-qlnjb" (OuterVolumeSpecName: "kube-api-access-qlnjb") pod "11e4932d-65cd-40d3-a441-604e8d96855c" (UID: "11e4932d-65cd-40d3-a441-604e8d96855c"). InnerVolumeSpecName "kube-api-access-qlnjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.682723 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11e4932d-65cd-40d3-a441-604e8d96855c" (UID: "11e4932d-65cd-40d3-a441-604e8d96855c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.711301 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlnjb\" (UniqueName: \"kubernetes.io/projected/11e4932d-65cd-40d3-a441-604e8d96855c-kube-api-access-qlnjb\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.711337 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:28 crc kubenswrapper[4966]: I0127 15:45:28.711357 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e4932d-65cd-40d3-a441-604e8d96855c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.026469 4966 generic.go:334] "Generic (PLEG): container finished" podID="11e4932d-65cd-40d3-a441-604e8d96855c" containerID="917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896" exitCode=0 Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.026549 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6njw" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.026549 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6njw" event={"ID":"11e4932d-65cd-40d3-a441-604e8d96855c","Type":"ContainerDied","Data":"917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896"} Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.026620 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6njw" event={"ID":"11e4932d-65cd-40d3-a441-604e8d96855c","Type":"ContainerDied","Data":"8c0e14a1aab7200d5f12d3a267207833366197f02b0ddeeb6edbce1ed56ea8cc"} Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.026652 4966 scope.go:117] "RemoveContainer" containerID="917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.061464 4966 scope.go:117] "RemoveContainer" containerID="ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.073781 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6njw"] Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.074613 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s6njw"] Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.108257 4966 scope.go:117] "RemoveContainer" containerID="597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.126609 4966 scope.go:117] "RemoveContainer" containerID="917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896" Jan 27 15:45:29 crc kubenswrapper[4966]: E0127 15:45:29.127174 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896\": container with ID starting with 917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896 not found: ID does not exist" containerID="917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.127219 
4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896"} err="failed to get container status \"917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896\": rpc error: code = NotFound desc = could not find container \"917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896\": container with ID starting with 917aa9114cc92e0b768642b3c07a3c1f4358189da59e356658fd6851bad2f896 not found: ID does not exist" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.127253 4966 scope.go:117] "RemoveContainer" containerID="ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183" Jan 27 15:45:29 crc kubenswrapper[4966]: E0127 15:45:29.127607 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183\": container with ID starting with ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183 not found: ID does not exist" containerID="ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.127638 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183"} err="failed to get container status \"ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183\": rpc error: code = NotFound desc = could not find container \"ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183\": container with ID starting with ca8743ae3a0d01e367763e8f720b87b6e3fd42a43ee01b300ccd4fd0658cf183 not found: ID does not exist" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.127659 4966 scope.go:117] "RemoveContainer" containerID="597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef" Jan 27 15:45:29 crc kubenswrapper[4966]: E0127 15:45:29.127954 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef\": container with ID starting with 597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef not found: ID does not exist" containerID="597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.127981 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef"} err="failed to get container status \"597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef\": rpc error: code = NotFound desc = could not find container \"597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef\": container with ID starting with 597cded85d12a15bffcb6be9882dae2acd85b33f958bf19a3b63889bf44f94ef not found: ID does not exist" Jan 27 15:45:29 crc kubenswrapper[4966]: I0127 15:45:29.518871 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lpj5"] Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.039102 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9lpj5" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerName="registry-server" containerID="cri-o://6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2" gracePeriod=2 Jan 27 
15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.361051 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.436429 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-catalog-content\") pod \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.436480 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-utilities\") pod \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.436546 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktsn9\" (UniqueName: \"kubernetes.io/projected/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-kube-api-access-ktsn9\") pod \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\" (UID: \"42e73b55-ac4e-4b3d-80be-2d9320fa42ce\") " Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.437440 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-utilities" (OuterVolumeSpecName: "utilities") pod "42e73b55-ac4e-4b3d-80be-2d9320fa42ce" (UID: "42e73b55-ac4e-4b3d-80be-2d9320fa42ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.441319 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-kube-api-access-ktsn9" (OuterVolumeSpecName: "kube-api-access-ktsn9") pod "42e73b55-ac4e-4b3d-80be-2d9320fa42ce" (UID: "42e73b55-ac4e-4b3d-80be-2d9320fa42ce"). InnerVolumeSpecName "kube-api-access-ktsn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.456585 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42e73b55-ac4e-4b3d-80be-2d9320fa42ce" (UID: "42e73b55-ac4e-4b3d-80be-2d9320fa42ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.527112 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" path="/var/lib/kubelet/pods/11e4932d-65cd-40d3-a441-604e8d96855c/volumes" Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.538048 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktsn9\" (UniqueName: \"kubernetes.io/projected/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-kube-api-access-ktsn9\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.538082 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:30 crc kubenswrapper[4966]: I0127 15:45:30.538090 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e73b55-ac4e-4b3d-80be-2d9320fa42ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.047605 4966 generic.go:334] "Generic (PLEG): container finished" podID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerID="6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2" exitCode=0 Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.047989 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lpj5" event={"ID":"42e73b55-ac4e-4b3d-80be-2d9320fa42ce","Type":"ContainerDied","Data":"6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2"} Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.048042 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lpj5" event={"ID":"42e73b55-ac4e-4b3d-80be-2d9320fa42ce","Type":"ContainerDied","Data":"09b88e93eaca967d61c350a888bc9151ccacf078a82bd296a0db4c426a37e618"} Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.048079 4966 scope.go:117] "RemoveContainer" containerID="6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.048318 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lpj5" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.077071 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lpj5"] Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.079498 4966 scope.go:117] "RemoveContainer" containerID="3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.082468 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lpj5"] Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.101204 4966 scope.go:117] "RemoveContainer" containerID="ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.131776 4966 scope.go:117] "RemoveContainer" containerID="6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2" Jan 27 15:45:31 crc kubenswrapper[4966]: E0127 15:45:31.132263 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2\": container with ID starting with 6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2 not found: ID does not exist" containerID="6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.132305 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2"} err="failed to get container status \"6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2\": rpc error: code = NotFound desc = could not find container \"6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2\": container with ID starting with 6c77b92585dcd5b1959386f7ed7cc60450eadae913d86694ab6a90a442d166a2 not found: ID does not exist" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.132334 4966 scope.go:117] "RemoveContainer" containerID="3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75" Jan 27 15:45:31 crc kubenswrapper[4966]: E0127 15:45:31.132587 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75\": container with ID starting with 3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75 not found: ID does not exist" containerID="3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.132613 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75"} err="failed to get container status \"3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75\": rpc error: code = NotFound desc = could not find container \"3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75\": container with ID starting with 3a5fe73b01879b815a564ec84ce9824edd2190707d8543ce06118b94af754d75 not found: ID does not exist" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.132631 4966 scope.go:117] "RemoveContainer" containerID="ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7" Jan 27 15:45:31 crc kubenswrapper[4966]: E0127 15:45:31.132956 4966 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7\": container with ID starting with ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7 not found: ID does not exist" containerID="ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7" Jan 27 15:45:31 crc kubenswrapper[4966]: I0127 15:45:31.132995 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7"} err="failed to get container status \"ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7\": rpc error: code = NotFound desc = could not find container \"ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7\": container with ID starting with ae64621d896310f978a4305bbbf2c76b809493488a5b147ab8dacff63f8762d7 not found: ID does not exist" Jan 27 15:45:32 crc kubenswrapper[4966]: I0127 15:45:32.526843 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" path="/var/lib/kubelet/pods/42e73b55-ac4e-4b3d-80be-2d9320fa42ce/volumes" Jan 27 15:45:32 crc kubenswrapper[4966]: I0127 15:45:32.772585 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:45:34 crc kubenswrapper[4966]: I0127 15:45:34.722123 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rckw5"] Jan 27 15:45:40 crc kubenswrapper[4966]: I0127 15:45:40.119651 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:45:40 crc kubenswrapper[4966]: I0127 15:45:40.119961 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:45:40 crc kubenswrapper[4966]: I0127 15:45:40.120011 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:45:40 crc kubenswrapper[4966]: I0127 15:45:40.120526 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:45:40 crc kubenswrapper[4966]: I0127 15:45:40.120578 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664" gracePeriod=600 Jan 27 15:45:41 crc kubenswrapper[4966]: I0127 15:45:41.116196 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" 
containerID="3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664" exitCode=0 Jan 27 15:45:41 crc kubenswrapper[4966]: I0127 15:45:41.116256 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664"} Jan 27 15:45:41 crc kubenswrapper[4966]: I0127 15:45:41.116540 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"fa991efa8c264472d6ff0c3eb9586659e3c6d4cca2ccc3928e23ac1cf4a47b67"} Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.655258 4966 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.656126 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b" gracePeriod=15 Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.656233 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1" gracePeriod=15 Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.656185 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049" gracePeriod=15 Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.656307 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306" gracePeriod=15 Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.656242 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2" gracePeriod=15 Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.660574 4966 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.662266 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerName="extract-content" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.662447 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerName="extract-content" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.662570 4966 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="11e4932d-65cd-40d3-a441-604e8d96855c" containerName="extract-utilities" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.662677 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" containerName="extract-utilities" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.662778 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.662922 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.663041 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.663158 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.663269 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" containerName="registry-server" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.663389 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" containerName="registry-server" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.663508 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.663625 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.663745 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerName="registry-server" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.663864 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerName="registry-server" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.664023 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.664154 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.664291 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerName="extract-utilities" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.665072 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerName="extract-utilities" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.665206 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerName="extract-content" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.665366 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerName="extract-content" Jan 27 15:45:52 crc kubenswrapper[4966]: 
E0127 15:45:52.665529 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerName="extract-utilities" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.666309 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerName="extract-utilities" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.666467 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerName="registry-server" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.666585 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerName="registry-server" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.666698 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d12582-0fd3-41b5-aff6-d3540016c72e" containerName="pruner" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.666801 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d12582-0fd3-41b5-aff6-d3540016c72e" containerName="pruner" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.666973 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.667099 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.667228 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.667366 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.667515 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" containerName="extract-content" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.667637 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" containerName="extract-content" Jan 27 15:45:52 crc kubenswrapper[4966]: E0127 15:45:52.667753 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.667871 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.668230 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.669023 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d12582-0fd3-41b5-aff6-d3540016c72e" containerName="pruner" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.669055 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e4932d-65cd-40d3-a441-604e8d96855c" containerName="registry-server" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.669073 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.669086 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3beb41-9a6e-40c4-b0bb-96938ba3b9d4" containerName="registry-server" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.669106 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.669122 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.669132 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.669143 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e73b55-ac4e-4b3d-80be-2d9320fa42ce" containerName="registry-server" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.669153 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.671540 4966 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.672277 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.678950 4966 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.721177 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.721993 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.722035 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.722065 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.722092 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.722119 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.722158 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.722201 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823125 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823179 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823199 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823221 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823250 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823254 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823313 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823279 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823366 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823383 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823402 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823433 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823441 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823459 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823362 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:52 crc kubenswrapper[4966]: I0127 15:45:52.823495 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.184859 4966 generic.go:334] "Generic (PLEG): container finished" podID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" containerID="e94d16d6c5997703344b8c768b8ff64a02a67a7d3dbd143f60c62f6e323ca9f8" exitCode=0 Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.184932 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"82f730e3-90c9-49c0-8bd2-b993a0f9bc66","Type":"ContainerDied","Data":"e94d16d6c5997703344b8c768b8ff64a02a67a7d3dbd143f60c62f6e323ca9f8"} Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.185965 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.187532 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.189122 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.189999 4966 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049" exitCode=0 Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.190027 4966 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2" exitCode=0 Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.190038 4966 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1" exitCode=0 Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.190048 4966 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306" exitCode=2 Jan 27 15:45:53 crc kubenswrapper[4966]: I0127 15:45:53.190093 4966 scope.go:117] "RemoveContainer" containerID="35b2902f9f3e699dc2083a94d8eff6a9e99bea2e11d7226f3e71bf9286c165a2" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.198609 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.414367 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.415364 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.444489 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kube-api-access\") pod \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.444664 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-var-lock\") pod \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.444693 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kubelet-dir\") pod \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\" (UID: \"82f730e3-90c9-49c0-8bd2-b993a0f9bc66\") " Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.444757 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-var-lock" (OuterVolumeSpecName: "var-lock") pod "82f730e3-90c9-49c0-8bd2-b993a0f9bc66" (UID: "82f730e3-90c9-49c0-8bd2-b993a0f9bc66"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.444805 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "82f730e3-90c9-49c0-8bd2-b993a0f9bc66" (UID: "82f730e3-90c9-49c0-8bd2-b993a0f9bc66"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.444968 4966 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.444993 4966 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.450224 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "82f730e3-90c9-49c0-8bd2-b993a0f9bc66" (UID: "82f730e3-90c9-49c0-8bd2-b993a0f9bc66"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.522462 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.546099 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82f730e3-90c9-49c0-8bd2-b993a0f9bc66-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:54 crc kubenswrapper[4966]: E0127 15:45:54.956435 4966 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:54 crc kubenswrapper[4966]: E0127 15:45:54.957131 4966 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:54 crc kubenswrapper[4966]: E0127 15:45:54.957364 4966 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:54 crc kubenswrapper[4966]: E0127 15:45:54.957553 4966 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:54 crc kubenswrapper[4966]: E0127 15:45:54.957736 4966 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:54 crc kubenswrapper[4966]: I0127 15:45:54.957765 4966 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 15:45:54 crc kubenswrapper[4966]: E0127 15:45:54.957953 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="200ms" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.030716 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.031431 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.032441 4966 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.033135 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.052337 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.052365 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.052384 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.052451 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.052479 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.052562 4966 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.052574 4966 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.052585 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.154086 4966 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:55 crc kubenswrapper[4966]: E0127 15:45:55.159405 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="400ms" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.206879 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"82f730e3-90c9-49c0-8bd2-b993a0f9bc66","Type":"ContainerDied","Data":"370db505401f1831a1db80b505c53c00f6c992a4daf147d6bd229c8762a97b1b"} Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.206959 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="370db505401f1831a1db80b505c53c00f6c992a4daf147d6bd229c8762a97b1b" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.207004 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.211522 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.212679 4966 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b" exitCode=0 Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.213176 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.213222 4966 scope.go:117] "RemoveContainer" containerID="5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.217888 4966 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.218499 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.234001 4966 scope.go:117] "RemoveContainer" containerID="8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.243674 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.244273 4966 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.250380 4966 scope.go:117] "RemoveContainer" containerID="1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.263787 4966 scope.go:117] "RemoveContainer" containerID="27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.281021 4966 scope.go:117] "RemoveContainer" containerID="184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.294517 4966 scope.go:117] "RemoveContainer" containerID="fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.311375 4966 scope.go:117] "RemoveContainer" containerID="5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049" Jan 27 15:45:55 crc kubenswrapper[4966]: E0127 15:45:55.312014 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\": container with ID starting with 5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049 not found: ID does not exist" containerID="5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.312052 4966 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049"} err="failed to get container status \"5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\": rpc error: code = NotFound desc = could not find container \"5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049\": container with ID starting with 5812c74850dc0298a18d4597ef92b199fffe35d1dfc0586149672556dd7e8049 not found: ID does not exist" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.312080 4966 scope.go:117] "RemoveContainer" containerID="8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2" Jan 27 15:45:55 crc kubenswrapper[4966]: E0127 15:45:55.312369 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\": container with ID starting with 8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2 not found: ID does not exist" containerID="8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.312462 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2"} err="failed to get container status \"8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\": rpc error: code = NotFound desc = could not find container \"8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2\": container with ID starting with 8eae81c71b3dad237f36ad4480111dbae9be1355dcff99a3e4f8acb974c9b7d2 not found: ID does not exist" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.312518 4966 scope.go:117] "RemoveContainer" containerID="1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1" Jan 27 15:45:55 crc kubenswrapper[4966]: E0127 15:45:55.312974 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\": container with ID starting with 1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1 not found: ID does not exist" containerID="1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.313005 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1"} err="failed to get container status \"1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\": rpc error: code = NotFound desc = could not find container \"1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1\": container with ID starting with 1893e7eab180144d2974d311a78f41b491a02eb33d4da6cbd90e711f3e41f2b1 not found: ID does not exist" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.313025 4966 scope.go:117] "RemoveContainer" containerID="27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306" Jan 27 15:45:55 crc kubenswrapper[4966]: E0127 15:45:55.313408 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\": container with ID starting with 27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306 not found: ID does not exist" 
containerID="27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.313602 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306"} err="failed to get container status \"27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\": rpc error: code = NotFound desc = could not find container \"27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306\": container with ID starting with 27e454a60a91ddb5e2bd208f5d0f2fe3b75297587839ab22f5bd5255a3a41306 not found: ID does not exist" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.314042 4966 scope.go:117] "RemoveContainer" containerID="184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b" Jan 27 15:45:55 crc kubenswrapper[4966]: E0127 15:45:55.314664 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\": container with ID starting with 184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b not found: ID does not exist" containerID="184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.314696 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b"} err="failed to get container status \"184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\": rpc error: code = NotFound desc = could not find container \"184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b\": container with ID starting with 184c5dfe8678fd3ab92d5d6c1ede631755aae812f9082b0481cddc550a6cb14b not found: ID does not exist" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.314713 4966 scope.go:117] "RemoveContainer" containerID="fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c" Jan 27 15:45:55 crc kubenswrapper[4966]: E0127 15:45:55.315772 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\": container with ID starting with fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c not found: ID does not exist" containerID="fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c" Jan 27 15:45:55 crc kubenswrapper[4966]: I0127 15:45:55.315808 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c"} err="failed to get container status \"fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\": rpc error: code = NotFound desc = could not find container \"fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c\": container with ID starting with fa107e1db32a329d5bc8f527f55a2b2ad7f3c42fc724d20acf4085e5480e448c not found: ID does not exist" Jan 27 15:45:55 crc kubenswrapper[4966]: E0127 15:45:55.560553 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="800ms" Jan 27 15:45:56 crc kubenswrapper[4966]: E0127 15:45:56.361575 4966 
Jan 27 15:45:56 crc kubenswrapper[4966]: E0127 15:45:56.361575 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="1.6s" Jan 27 15:45:56 crc kubenswrapper[4966]: I0127 15:45:56.531917 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 15:45:57 crc kubenswrapper[4966]: E0127 15:45:57.707784 4966 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:57 crc kubenswrapper[4966]: I0127 15:45:57.708630 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:57 crc kubenswrapper[4966]: E0127 15:45:57.737196 4966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.58:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea107a0ad2e00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:45:57.735730688 +0000 UTC m=+224.038524176,LastTimestamp:2026-01-27 15:45:57.735730688 +0000 UTC m=+224.038524176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:45:57 crc kubenswrapper[4966]: E0127 15:45:57.962275 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="3.2s" Jan 27 15:45:58 crc kubenswrapper[4966]: I0127 15:45:58.243003 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"21693d3ffc8ee0aeeea062b80798940b1010af01c4730a7485e81d77f03e4c81"} Jan 27 15:45:58 crc kubenswrapper[4966]: I0127 15:45:58.243060 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7986765933f58c1f6347f837b0a41cb2ee61d398e87a4f047e628d468039e8d6"} Jan 27 15:45:58 crc kubenswrapper[4966]: E0127 15:45:58.243674 4966 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:45:58 crc kubenswrapper[4966]: I0127 15:45:58.243912 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:45:59 crc kubenswrapper[4966]: E0127 15:45:59.588840 4966 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.58:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" volumeName="registry-storage" Jan 27 15:45:59 crc kubenswrapper[4966]: I0127 15:45:59.753559 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" containerName="oauth-openshift" containerID="cri-o://c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905" gracePeriod=15 Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.159383 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.160079 4966 status_manager.go:851] "Failed to get status for pod" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rckw5\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.160607 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.217536 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-cliconfig\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.217638 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-dir\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.217730 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzlnp\" (UniqueName: \"kubernetes.io/projected/612fb5e2-ec40-4a52-b6fb-463e64e0e872-kube-api-access-rzlnp\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.217800 4966 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-ocp-branding-template\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.217959 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.217975 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-trusted-ca-bundle\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218118 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-policies\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218178 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-provider-selection\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218270 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-error\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218320 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-service-ca\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218367 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-idp-0-file-data\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218424 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-session\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218503 4966 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-serving-cert\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218554 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-router-certs\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218625 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-login\") pod \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\" (UID: \"612fb5e2-ec40-4a52-b6fb-463e64e0e872\") " Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218951 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.218976 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.219136 4966 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.219161 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.219183 4966 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/612fb5e2-ec40-4a52-b6fb-463e64e0e872-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.219170 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.220720 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.224394 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.230590 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.231374 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.231709 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.231841 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.231920 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.232285 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.232665 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.232802 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612fb5e2-ec40-4a52-b6fb-463e64e0e872-kube-api-access-rzlnp" (OuterVolumeSpecName: "kube-api-access-rzlnp") pod "612fb5e2-ec40-4a52-b6fb-463e64e0e872" (UID: "612fb5e2-ec40-4a52-b6fb-463e64e0e872"). InnerVolumeSpecName "kube-api-access-rzlnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.256562 4966 generic.go:334] "Generic (PLEG): container finished" podID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" containerID="c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905" exitCode=0 Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.256631 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" event={"ID":"612fb5e2-ec40-4a52-b6fb-463e64e0e872","Type":"ContainerDied","Data":"c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905"} Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.256648 4966 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.256671 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" event={"ID":"612fb5e2-ec40-4a52-b6fb-463e64e0e872","Type":"ContainerDied","Data":"ae7d8525de78283e7edd86bcc0dd096d0053abda398b362335675ff7f7764e60"} Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.256703 4966 scope.go:117] "RemoveContainer" containerID="c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.257625 4966 status_manager.go:851] "Failed to get status for pod" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rckw5\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.258274 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.281284 4966 status_manager.go:851] "Failed to get status for pod" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rckw5\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.281579 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.293523 4966 scope.go:117] "RemoveContainer" containerID="c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905" Jan 27 15:46:00 crc kubenswrapper[4966]: E0127 15:46:00.294031 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905\": container with ID starting with c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905 not found: ID does not exist" containerID="c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.294075 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905"} err="failed to get container status \"c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905\": rpc error: code = NotFound desc = could not find container \"c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905\": container with ID starting with c30b370d3e538adb1b55462db75d700afb1165fb2d286192b8d3a5901442c905 not found: ID does not exist" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320385 4966 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320426 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320437 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320446 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320455 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320465 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320473 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320482 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320490 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzlnp\" (UniqueName: \"kubernetes.io/projected/612fb5e2-ec40-4a52-b6fb-463e64e0e872-kube-api-access-rzlnp\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320499 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:00 crc kubenswrapper[4966]: I0127 15:46:00.320509 4966 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/612fb5e2-ec40-4a52-b6fb-463e64e0e872-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:01 crc kubenswrapper[4966]: E0127 15:46:01.163478 4966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.58:6443: connect: connection refused" interval="6.4s" Jan 27 15:46:03 crc 
Jan 27 15:46:03 crc kubenswrapper[4966]: E0127 15:46:03.602777 4966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.58:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea107a0ad2e00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:45:57.735730688 +0000 UTC m=+224.038524176,LastTimestamp:2026-01-27 15:45:57.735730688 +0000 UTC m=+224.038524176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:46:04 crc kubenswrapper[4966]: I0127 15:46:04.524505 4966 status_manager.go:851] "Failed to get status for pod" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rckw5\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:04 crc kubenswrapper[4966]: I0127 15:46:04.525012 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:05 crc kubenswrapper[4966]: I0127 15:46:05.519933 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:05 crc kubenswrapper[4966]: I0127 15:46:05.521398 4966 status_manager.go:851] "Failed to get status for pod" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rckw5\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:05 crc kubenswrapper[4966]: I0127 15:46:05.522127 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:05 crc kubenswrapper[4966]: I0127 15:46:05.544362 4966 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:05 crc kubenswrapper[4966]: I0127 15:46:05.544411 4966 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:05 crc kubenswrapper[4966]: E0127 15:46:05.545310 4966 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:05 crc kubenswrapper[4966]: I0127 15:46:05.545983 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.299575 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.299655 4966 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0" exitCode=1 Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.299770 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0"} Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.300334 4966 scope.go:117] "RemoveContainer" containerID="83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.300859 4966 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.301522 4966 status_manager.go:851] "Failed to get status for pod" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rckw5\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.302060 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.302556 4966 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9875d286fc89a29c5be89ee3580371d84daf19ed266a76015dbee04095fcc2a8" exitCode=0 Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.302615 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9875d286fc89a29c5be89ee3580371d84daf19ed266a76015dbee04095fcc2a8"} Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.302712 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"42e4467ed93386645579b8890db6c38a94d08bf997bb845c94ea2bf0b38d9618"} Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.303158 4966 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.303186 4966 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.303297 4966 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.303705 4966 status_manager.go:851] "Failed to get status for pod" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" pod="openshift-authentication/oauth-openshift-558db77b4-rckw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rckw5\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:06 crc kubenswrapper[4966]: E0127 15:46:06.303827 4966 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:06 crc kubenswrapper[4966]: I0127 15:46:06.303993 4966 status_manager.go:851] "Failed to get status for pod" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.58:6443: connect: connection refused" Jan 27 15:46:07 crc kubenswrapper[4966]: I0127 15:46:07.312494 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"89d594aaeb042c38016edb99078ee3487f967a72137dce9dac9e962df5b07f61"} Jan 27 15:46:07 crc kubenswrapper[4966]: I0127 15:46:07.312842 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d42b2b339e932b78d71cc40e7c6d4317ca663d09966e6bc86bbb3f992d02f120"} Jan 27 15:46:07 crc kubenswrapper[4966]: I0127 15:46:07.312860 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0e84d12baf5113c91143cbd4c5e6638f7e5d63b6d0cfca11d7b253cc6fa7a386"} Jan 27 15:46:07 crc kubenswrapper[4966]: I0127 15:46:07.312872 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"204c23cb8e1a418e9f6567a5b9a7d1ef0d8561364ce65ca231cf203d2476be1a"} Jan 27 15:46:07 crc kubenswrapper[4966]: I0127 15:46:07.316469 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 15:46:07 crc kubenswrapper[4966]: I0127 15:46:07.316516 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fac85f1bd4c5565f2422e11b56b9230af9775caaee9c172882ea6b9e576f7040"} Jan 27 15:46:07 crc kubenswrapper[4966]: I0127 15:46:07.994778 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:46:08 crc kubenswrapper[4966]: I0127 15:46:08.326100 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d38dadfd27303fd659d6534b9396b99c6d73fc752ae17a22017b22283d9a10fc"} Jan 27 15:46:08 crc kubenswrapper[4966]: I0127 15:46:08.326526 4966 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:08 crc kubenswrapper[4966]: I0127 15:46:08.326555 4966 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:10 crc kubenswrapper[4966]: I0127 15:46:10.546563 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:10 crc kubenswrapper[4966]: I0127 15:46:10.547102 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:10 crc kubenswrapper[4966]: I0127 15:46:10.552712 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:13 crc kubenswrapper[4966]: I0127 15:46:13.338886 4966 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:13 crc kubenswrapper[4966]: I0127 15:46:13.355624 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:13 crc kubenswrapper[4966]: I0127 15:46:13.355707 4966 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:13 crc kubenswrapper[4966]: I0127 15:46:13.355732 4966 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:13 crc kubenswrapper[4966]: I0127 15:46:13.360614 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:13 crc kubenswrapper[4966]: I0127 15:46:13.594761 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:46:13 crc kubenswrapper[4966]: I0127 15:46:13.594931 4966 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 15:46:13 crc kubenswrapper[4966]: I0127 15:46:13.595006 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 15:46:14 crc kubenswrapper[4966]: I0127 15:46:14.367751 4966 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:14 crc kubenswrapper[4966]: I0127 15:46:14.368199 4966 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:14 crc kubenswrapper[4966]: I0127 15:46:14.533594 4966 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="367aa857-d080-492e-aefd-e43ac58a4d9f" Jan 27 15:46:15 crc kubenswrapper[4966]: I0127 15:46:15.371654 4966 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:15 crc kubenswrapper[4966]: I0127 15:46:15.371975 4966 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9da132b9-a1bf-4cad-b364-950b3a4ccc81" Jan 27 15:46:15 crc kubenswrapper[4966]: I0127 15:46:15.374791 4966 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="367aa857-d080-492e-aefd-e43ac58a4d9f" Jan 27 15:46:22 crc kubenswrapper[4966]: I0127 15:46:22.498141 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 15:46:23 crc kubenswrapper[4966]: I0127 15:46:23.402456 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 15:46:23 crc kubenswrapper[4966]: I0127 15:46:23.484270 4966 
Jan 27 15:46:23 crc kubenswrapper[4966]: I0127 15:46:23.595693 4966 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 15:46:23 crc kubenswrapper[4966]: I0127 15:46:23.595811 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.156996 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.210637 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.437291 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.448665 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.551996 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.629649 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.751655 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.821307 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.825553 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.840215 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 15:46:24 crc kubenswrapper[4966]: I0127 15:46:24.858214 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 15:46:25 crc kubenswrapper[4966]: I0127 15:46:25.143308 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 15:46:25 crc kubenswrapper[4966]: I0127 15:46:25.231300 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 15:46:25 crc kubenswrapper[4966]: I0127 15:46:25.446147 4966 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 15:46:25 crc kubenswrapper[4966]: I0127 15:46:25.586472 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 15:46:25 crc kubenswrapper[4966]: I0127 15:46:25.652368 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 15:46:25 crc kubenswrapper[4966]: I0127 15:46:25.809532 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 15:46:25 crc kubenswrapper[4966]: I0127 15:46:25.942205 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.000211 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.100739 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.157134 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.209984 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.254263 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.283130 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.299215 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.485808 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.575287 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.653580 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.666757 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 15:46:26 crc kubenswrapper[4966]: I0127 15:46:26.931078 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.107605 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.180130 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.248134 4966 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.279824 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.302403 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.309866 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.386636 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.421120 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.421425 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.460914 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.563272 4966 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.567532 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-rckw5"] Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.567599 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.578058 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.596945 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.596924829 podStartE2EDuration="14.596924829s" podCreationTimestamp="2026-01-27 15:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:46:27.59017433 +0000 UTC m=+253.892967868" watchObservedRunningTime="2026-01-27 15:46:27.596924829 +0000 UTC m=+253.899718327" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.607200 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.635501 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.644276 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.704627 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.739334 4966 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.814057 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.816756 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.853670 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.859509 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.865720 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 15:46:27 crc kubenswrapper[4966]: I0127 15:46:27.946459 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.053155 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.121576 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.134525 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.171997 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.245591 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.319560 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.357249 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.474734 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.516769 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.538617 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" path="/var/lib/kubelet/pods/612fb5e2-ec40-4a52-b6fb-463e64e0e872/volumes" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.640986 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.652721 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 15:46:28 crc 
kubenswrapper[4966]: I0127 15:46:28.772530 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.833771 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.854645 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 15:46:28 crc kubenswrapper[4966]: I0127 15:46:28.926346 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.012268 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.045574 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.119373 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.126828 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.240239 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.274925 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.307241 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.313868 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.355531 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.446021 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.450274 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.475528 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.615477 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.684357 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.687630 4966 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.695012 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.732811 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.791665 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.796867 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.804527 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.815315 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.860352 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.863293 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 15:46:29 crc kubenswrapper[4966]: I0127 15:46:29.909096 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.027332 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.051264 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.094101 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.108388 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.161686 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.302222 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.356526 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.375779 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.498097 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.538684 4966 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.544545 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.645530 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.662594 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.669243 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.689310 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.717805 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.973999 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-rs2mz"] Jan 27 15:46:30 crc kubenswrapper[4966]: E0127 15:46:30.974243 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" containerName="installer" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.974258 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" containerName="installer" Jan 27 15:46:30 crc kubenswrapper[4966]: E0127 15:46:30.974270 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" containerName="oauth-openshift" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.974278 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" containerName="oauth-openshift" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.974414 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="612fb5e2-ec40-4a52-b6fb-463e64e0e872" containerName="oauth-openshift" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.974434 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="82f730e3-90c9-49c0-8bd2-b993a0f9bc66" containerName="installer" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.974885 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.976926 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.977660 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.977705 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.977870 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.978115 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.979114 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.979220 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.979646 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.979893 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.981822 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.982197 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.982386 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.996036 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.996969 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-rs2mz"] Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.997081 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 15:46:30 crc kubenswrapper[4966]: I0127 15:46:30.997524 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001144 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " 
pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001202 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001254 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-audit-policies\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001290 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0983135-11bf-4938-9360-757bc4556ec0-audit-dir\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001313 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001356 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001395 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001423 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001446 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-template-error\") 
pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001468 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001511 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001543 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001570 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxnsb\" (UniqueName: \"kubernetes.io/projected/d0983135-11bf-4938-9360-757bc4556ec0-kube-api-access-jxnsb\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.001597 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.015046 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.017370 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.056685 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.063627 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102559 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-audit-policies\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: 
\"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102624 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0983135-11bf-4938-9360-757bc4556ec0-audit-dir\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102665 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102715 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102762 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102794 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102826 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102872 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102949 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: 
\"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.102984 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.103033 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxnsb\" (UniqueName: \"kubernetes.io/projected/d0983135-11bf-4938-9360-757bc4556ec0-kube-api-access-jxnsb\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.103066 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.103146 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.103185 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.103869 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.104690 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0983135-11bf-4938-9360-757bc4556ec0-audit-dir\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.105682 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-audit-policies\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 
27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.105999 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz"
Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.106143 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz"
Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.109112 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz"
Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.110178 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz"
Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.110874 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz"
Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.110922 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.111401 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz"
Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.112036 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz"
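
Each volume of oauth-openshift-666545c866-rs2mz appears twice in this stretch of the log: first as "operationExecutor.VerifyControllerAttachedVolume started" (reconciler_common.go), then as "MountVolume.SetUp succeeded" (operation_generator.go). A throwaway triage helper, hypothetical and not part of any toolchain, that pairs the two phases from a saved kubelet log on stdin and flags volumes that never reached a successful mount; it assumes one log entry per line, as in the reflowed sections here.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Volume names appear in the log as: for volume \"v4-0-config-system-session\"
	re := regexp.MustCompile(`for volume \\"([^"\\]+)\\"`)
	started := map[string]bool{}
	mounted := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be long
	for sc.Scan() {
		line := sc.Text()
		m := re.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		switch {
		case strings.Contains(line, "VerifyControllerAttachedVolume started"):
			started[m[1]] = true
		case strings.Contains(line, "MountVolume.SetUp succeeded"):
			mounted[m[1]] = true
		}
	}
	for v := range started {
		if !mounted[v] {
			fmt.Printf("volume %q verified but no successful mount logged\n", v)
		}
	}
}
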
\"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.113217 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.113602 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0983135-11bf-4938-9360-757bc4556ec0-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.124752 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxnsb\" (UniqueName: \"kubernetes.io/projected/d0983135-11bf-4938-9360-757bc4556ec0-kube-api-access-jxnsb\") pod \"oauth-openshift-666545c866-rs2mz\" (UID: \"d0983135-11bf-4938-9360-757bc4556ec0\") " pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.160262 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.160374 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.172097 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.253336 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.300306 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.301646 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.314025 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.316559 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.319473 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.365653 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.394722 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.443481 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.525174 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.536065 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.538695 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.669007 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.673050 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.828390 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.842823 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.848810 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.873006 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.962012 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.962195 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 15:46:31 crc kubenswrapper[4966]: I0127 15:46:31.963307 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 15:46:32 crc kubenswrapper[4966]: I0127 15:46:32.044792 
4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 15:46:32 crc kubenswrapper[4966]: I0127 15:46:32.046308 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:46:32 crc kubenswrapper[4966]: I0127 15:46:32.155417 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 15:46:32 crc kubenswrapper[4966]: I0127 15:46:32.207485 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 15:46:32 crc kubenswrapper[4966]: I0127 15:46:32.222242 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 15:46:32 crc kubenswrapper[4966]: I0127 15:46:32.414500 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 15:46:32 crc kubenswrapper[4966]: I0127 15:46:32.474605 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 15:46:32 crc kubenswrapper[4966]: I0127 15:46:32.569120 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 15:46:32 crc kubenswrapper[4966]: I0127 15:46:32.842624 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.021489 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.111016 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.263884 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.264846 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.369692 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.401331 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.403056 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.517027 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.595060 4966 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.595513 4966 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.595668 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.596361 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.596798 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"fac85f1bd4c5565f2422e11b56b9230af9775caaee9c172882ea6b9e576f7040"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.597255 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://fac85f1bd4c5565f2422e11b56b9230af9775caaee9c172882ea6b9e576f7040" gracePeriod=30 Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.669491 4966 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.752644 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.752792 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 15:46:33 crc kubenswrapper[4966]: I0127 15:46:33.873662 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.050602 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.053720 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.083098 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.166120 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 15:46:34 crc kubenswrapper[4966]: E0127 15:46:34.260220 4966 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 15:46:34 crc kubenswrapper[4966]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-666545c866-rs2mz_openshift-authentication_d0983135-11bf-4938-9360-757bc4556ec0_0(c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b): error adding pod openshift-authentication_oauth-openshift-666545c866-rs2mz to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b" Netns:"/var/run/netns/9a5c87db-f056-46b7-b30c-3db47b226127" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-666545c866-rs2mz;K8S_POD_INFRA_CONTAINER_ID=c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b;K8S_POD_UID=d0983135-11bf-4938-9360-757bc4556ec0" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-666545c866-rs2mz] networking: Multus: [openshift-authentication/oauth-openshift-666545c866-rs2mz/d0983135-11bf-4938-9360-757bc4556ec0]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-666545c866-rs2mz in out of cluster comm: pod "oauth-openshift-666545c866-rs2mz" not found Jan 27 15:46:34 crc kubenswrapper[4966]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 15:46:34 crc kubenswrapper[4966]: > Jan 27 15:46:34 crc kubenswrapper[4966]: E0127 15:46:34.260290 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 15:46:34 crc kubenswrapper[4966]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-666545c866-rs2mz_openshift-authentication_d0983135-11bf-4938-9360-757bc4556ec0_0(c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b): error adding pod openshift-authentication_oauth-openshift-666545c866-rs2mz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b" Netns:"/var/run/netns/9a5c87db-f056-46b7-b30c-3db47b226127" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-666545c866-rs2mz;K8S_POD_INFRA_CONTAINER_ID=c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b;K8S_POD_UID=d0983135-11bf-4938-9360-757bc4556ec0" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-666545c866-rs2mz] networking: Multus: [openshift-authentication/oauth-openshift-666545c866-rs2mz/d0983135-11bf-4938-9360-757bc4556ec0]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-666545c866-rs2mz in out of cluster comm: pod "oauth-openshift-666545c866-rs2mz" not found Jan 27 15:46:34 crc kubenswrapper[4966]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 15:46:34 crc kubenswrapper[4966]: > pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:34 crc kubenswrapper[4966]: E0127 15:46:34.260330 4966 kuberuntime_manager.go:1170] "CreatePodSandbox 
for pod failed" err=< Jan 27 15:46:34 crc kubenswrapper[4966]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-666545c866-rs2mz_openshift-authentication_d0983135-11bf-4938-9360-757bc4556ec0_0(c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b): error adding pod openshift-authentication_oauth-openshift-666545c866-rs2mz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b" Netns:"/var/run/netns/9a5c87db-f056-46b7-b30c-3db47b226127" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-666545c866-rs2mz;K8S_POD_INFRA_CONTAINER_ID=c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b;K8S_POD_UID=d0983135-11bf-4938-9360-757bc4556ec0" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-666545c866-rs2mz] networking: Multus: [openshift-authentication/oauth-openshift-666545c866-rs2mz/d0983135-11bf-4938-9360-757bc4556ec0]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-666545c866-rs2mz in out of cluster comm: pod "oauth-openshift-666545c866-rs2mz" not found Jan 27 15:46:34 crc kubenswrapper[4966]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 15:46:34 crc kubenswrapper[4966]: > pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:34 crc kubenswrapper[4966]: E0127 15:46:34.260392 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-666545c866-rs2mz_openshift-authentication(d0983135-11bf-4938-9360-757bc4556ec0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-666545c866-rs2mz_openshift-authentication(d0983135-11bf-4938-9360-757bc4556ec0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-666545c866-rs2mz_openshift-authentication_d0983135-11bf-4938-9360-757bc4556ec0_0(c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b): error adding pod openshift-authentication_oauth-openshift-666545c866-rs2mz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b\\\" Netns:\\\"/var/run/netns/9a5c87db-f056-46b7-b30c-3db47b226127\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-666545c866-rs2mz;K8S_POD_INFRA_CONTAINER_ID=c11970947e9f872913f366033fe315e94d715206825fc6233fb7e0f94e98ef3b;K8S_POD_UID=d0983135-11bf-4938-9360-757bc4556ec0\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-666545c866-rs2mz] networking: Multus: [openshift-authentication/oauth-openshift-666545c866-rs2mz/d0983135-11bf-4938-9360-757bc4556ec0]: error setting the networks status, pod was already deleted: 
SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-666545c866-rs2mz in out of cluster comm: pod \\\"oauth-openshift-666545c866-rs2mz\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podUID="d0983135-11bf-4938-9360-757bc4556ec0" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.282108 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.359499 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.423490 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.478679 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.479292 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.549072 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.554152 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.572566 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.599037 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.674717 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.763036 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.774079 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.820027 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 15:46:34 crc kubenswrapper[4966]: I0127 15:46:34.947532 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.009937 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 15:46:35 
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.075468 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.096547 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.185192 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.303929 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.313318 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.331834 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.363829 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.406189 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.461651 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.474824 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.513060 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.526738 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.551919 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.558006 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.588884 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.787957 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.790266 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.792278 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.902079 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.946450 4966 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 27 15:46:35 crc kubenswrapper[4966]: I0127 15:46:35.946703 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://21693d3ffc8ee0aeeea062b80798940b1010af01c4730a7485e81d77f03e4c81" gracePeriod=5
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.017410 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.042602 4966 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.101052 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.190433 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.204638 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.276753 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.304441 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.305112 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.434120 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.457816 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.655979 4966 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.808760 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.816755 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.900194 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.950474 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 27 15:46:36 crc kubenswrapper[4966]: I0127 15:46:36.964213 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.103918 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.207724 4966 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.238022 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-rs2mz"]
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.388804 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.494987 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" event={"ID":"d0983135-11bf-4938-9360-757bc4556ec0","Type":"ContainerStarted","Data":"8b9c29df2d61e9cd3d863b70bf85a017a64ea458cba65c29fec31baa0a9c6023"}
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.552523 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.713914 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.787936 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.831037 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.938114 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 15:46:37 crc kubenswrapper[4966]: I0127 15:46:37.939429 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.222729 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.387215 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.443036 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.458684 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.501302 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" event={"ID":"d0983135-11bf-4938-9360-757bc4556ec0","Type":"ContainerStarted","Data":"98617d42c30b56faedb054370539d05d2e39227d1504b38ccb8b17c22b2714ee"}
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.502598 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.508001 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.520333 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podStartSLOduration=64.520313333 podStartE2EDuration="1m4.520313333s" podCreationTimestamp="2026-01-27 15:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:46:38.517534682 +0000 UTC m=+264.820328180" watchObservedRunningTime="2026-01-27 15:46:38.520313333 +0000 UTC m=+264.823106831"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.549498 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.773224 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.970226 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 27 15:46:38 crc kubenswrapper[4966]: I0127 15:46:38.971923 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 15:46:39 crc kubenswrapper[4966]: I0127 15:46:39.088050 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 15:46:39 crc kubenswrapper[4966]: I0127 15:46:39.104734 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 15:46:39 crc kubenswrapper[4966]: I0127 15:46:39.110659 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 27 15:46:39 crc kubenswrapper[4966]: I0127 15:46:39.139192 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 15:46:39 crc kubenswrapper[4966]: I0127 15:46:39.583975 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 27 15:46:40 crc kubenswrapper[4966]: I0127 15:46:40.340979 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 27 15:46:40 crc kubenswrapper[4966]: I0127 15:46:40.639524 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 15:46:40 crc kubenswrapper[4966]: I0127 15:46:40.661473 4966 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.523225 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.523300 4966 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="21693d3ffc8ee0aeeea062b80798940b1010af01c4730a7485e81d77f03e4c81" exitCode=137
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.523344 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7986765933f58c1f6347f837b0a41cb2ee61d398e87a4f047e628d468039e8d6"
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.560742 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.560877 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.664217 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.664284 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.664313 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.664344 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.664412 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.664419 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.664495 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.664527 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.664573 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.665145 4966 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.665171 4966 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.665184 4966 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.665210 4966 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.678173 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:46:41 crc kubenswrapper[4966]: I0127 15:46:41.765975 4966 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:46:42 crc kubenswrapper[4966]: I0127 15:46:42.533841 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 15:46:54 crc kubenswrapper[4966]: I0127 15:46:54.607941 4966 generic.go:334] "Generic (PLEG): container finished" podID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerID="22d7c4c40beca27781ab90f03b512e8744efdd00e64cc9eaf1f74f6e722837b0" exitCode=0 Jan 27 15:46:54 crc kubenswrapper[4966]: I0127 15:46:54.608017 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" event={"ID":"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6","Type":"ContainerDied","Data":"22d7c4c40beca27781ab90f03b512e8744efdd00e64cc9eaf1f74f6e722837b0"} Jan 27 15:46:54 crc kubenswrapper[4966]: I0127 15:46:54.608949 4966 scope.go:117] "RemoveContainer" containerID="22d7c4c40beca27781ab90f03b512e8744efdd00e64cc9eaf1f74f6e722837b0" Jan 27 15:46:55 crc kubenswrapper[4966]: I0127 15:46:55.616383 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" event={"ID":"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6","Type":"ContainerStarted","Data":"dbeb8c5d1a5967fc8ce57f40c7b77c409baa0f5234f843f0804cd2c35d43c8f6"} Jan 27 15:46:55 crc kubenswrapper[4966]: I0127 15:46:55.617120 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:46:55 crc kubenswrapper[4966]: I0127 15:46:55.619868 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:47:03 crc kubenswrapper[4966]: I0127 15:47:03.672721 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 15:47:03 crc kubenswrapper[4966]: I0127 15:47:03.675975 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 15:47:03 crc kubenswrapper[4966]: I0127 15:47:03.676087 4966 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fac85f1bd4c5565f2422e11b56b9230af9775caaee9c172882ea6b9e576f7040" exitCode=137 Jan 27 15:47:03 crc kubenswrapper[4966]: I0127 15:47:03.676151 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fac85f1bd4c5565f2422e11b56b9230af9775caaee9c172882ea6b9e576f7040"} Jan 27 15:47:03 crc kubenswrapper[4966]: I0127 15:47:03.676227 4966 scope.go:117] "RemoveContainer" containerID="83be25b7f05d2a1303a616fb9736297b604f86ec6d60b05b56fcdeed824f5fe0" Jan 27 15:47:04 crc kubenswrapper[4966]: I0127 15:47:04.684063 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 15:47:04 crc kubenswrapper[4966]: I0127 15:47:04.685372 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"298e80b34deea71f20d4432f6323f4ebf6d0071fae49e9b6103adf4b1a91de47"} Jan 27 15:47:07 crc kubenswrapper[4966]: I0127 15:47:07.994335 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:47:13 crc kubenswrapper[4966]: I0127 15:47:13.594471 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:47:13 crc kubenswrapper[4966]: I0127 15:47:13.599704 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:47:13 crc kubenswrapper[4966]: I0127 15:47:13.735510 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:47:14 crc kubenswrapper[4966]: I0127 15:47:14.313627 4966 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 15:47:27 crc kubenswrapper[4966]: I0127 15:47:27.899597 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"] Jan 27 15:47:27 crc kubenswrapper[4966]: I0127 15:47:27.900317 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" podUID="ac92efeb-93b0-4044-9b79-fbfc19fc629e" containerName="route-controller-manager" containerID="cri-o://0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328" gracePeriod=30 Jan 27 15:47:27 crc kubenswrapper[4966]: I0127 15:47:27.926700 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w8jb4"] Jan 27 15:47:27 crc kubenswrapper[4966]: I0127 15:47:27.927174 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" podUID="b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" containerName="controller-manager" containerID="cri-o://e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284" gracePeriod=30 Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.310229 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.316620 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.384467 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-config\") pod \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.384752 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh4n4\" (UniqueName: \"kubernetes.io/projected/ac92efeb-93b0-4044-9b79-fbfc19fc629e-kube-api-access-rh4n4\") pod \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.384850 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-client-ca\") pod \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.384953 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92efeb-93b0-4044-9b79-fbfc19fc629e-serving-cert\") pod \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\" (UID: \"ac92efeb-93b0-4044-9b79-fbfc19fc629e\") " Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.385367 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-config" (OuterVolumeSpecName: "config") pod "ac92efeb-93b0-4044-9b79-fbfc19fc629e" (UID: "ac92efeb-93b0-4044-9b79-fbfc19fc629e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.385817 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-client-ca" (OuterVolumeSpecName: "client-ca") pod "ac92efeb-93b0-4044-9b79-fbfc19fc629e" (UID: "ac92efeb-93b0-4044-9b79-fbfc19fc629e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.406191 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac92efeb-93b0-4044-9b79-fbfc19fc629e-kube-api-access-rh4n4" (OuterVolumeSpecName: "kube-api-access-rh4n4") pod "ac92efeb-93b0-4044-9b79-fbfc19fc629e" (UID: "ac92efeb-93b0-4044-9b79-fbfc19fc629e"). InnerVolumeSpecName "kube-api-access-rh4n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.406770 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac92efeb-93b0-4044-9b79-fbfc19fc629e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ac92efeb-93b0-4044-9b79-fbfc19fc629e" (UID: "ac92efeb-93b0-4044-9b79-fbfc19fc629e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.486309 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-proxy-ca-bundles\") pod \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.486385 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-config\") pod \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.486478 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-serving-cert\") pod \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.486505 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-client-ca\") pod \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.486525 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9wzv\" (UniqueName: \"kubernetes.io/projected/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-kube-api-access-r9wzv\") pod \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\" (UID: \"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6\") " Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.486698 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.486708 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh4n4\" (UniqueName: \"kubernetes.io/projected/ac92efeb-93b0-4044-9b79-fbfc19fc629e-kube-api-access-rh4n4\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.486718 4966 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac92efeb-93b0-4044-9b79-fbfc19fc629e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.486726 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92efeb-93b0-4044-9b79-fbfc19fc629e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.487192 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" (UID: "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.487297 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-config" (OuterVolumeSpecName: "config") pod "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" (UID: "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.487742 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-client-ca" (OuterVolumeSpecName: "client-ca") pod "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" (UID: "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.489248 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" (UID: "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.489251 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-kube-api-access-r9wzv" (OuterVolumeSpecName: "kube-api-access-r9wzv") pod "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" (UID: "b36ea7a2-dd43-4f61-a99e-523cd4bea6b6"). InnerVolumeSpecName "kube-api-access-r9wzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.587960 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.587992 4966 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.588003 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9wzv\" (UniqueName: \"kubernetes.io/projected/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-kube-api-access-r9wzv\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.588012 4966 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.588044 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.815166 4966 generic.go:334] "Generic (PLEG): container finished" podID="b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" containerID="e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284" exitCode=0 Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.815235 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" event={"ID":"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6","Type":"ContainerDied","Data":"e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284"} Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.815271 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" event={"ID":"b36ea7a2-dd43-4f61-a99e-523cd4bea6b6","Type":"ContainerDied","Data":"236f498289c18ca399709ac7813cce33cd3e5784a44e4f3c9510bf9eb9cc4719"} Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.815288 4966 scope.go:117] "RemoveContainer" containerID="e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.815375 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w8jb4" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.817069 4966 generic.go:334] "Generic (PLEG): container finished" podID="ac92efeb-93b0-4044-9b79-fbfc19fc629e" containerID="0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328" exitCode=0 Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.817103 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" event={"ID":"ac92efeb-93b0-4044-9b79-fbfc19fc629e","Type":"ContainerDied","Data":"0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328"} Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.817117 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" event={"ID":"ac92efeb-93b0-4044-9b79-fbfc19fc629e","Type":"ContainerDied","Data":"59e8f32ebe61814f54edf8716befc1a7e04791a19a7b5b62e2d8cbffa6ab4534"} Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.817163 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.832179 4966 scope.go:117] "RemoveContainer" containerID="e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284" Jan 27 15:47:28 crc kubenswrapper[4966]: E0127 15:47:28.832678 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284\": container with ID starting with e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284 not found: ID does not exist" containerID="e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.832721 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284"} err="failed to get container status \"e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284\": rpc error: code = NotFound desc = could not find container \"e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284\": container with ID starting with e9b0b48ce6f3d36592d27974224bd0c3614952c4fd5d913bd423c8674121d284 not found: ID does not exist" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.832746 4966 scope.go:117] "RemoveContainer" containerID="0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.841128 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w8jb4"] Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.844699 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w8jb4"] Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.846037 4966 scope.go:117] "RemoveContainer" containerID="0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328" Jan 27 15:47:28 crc kubenswrapper[4966]: E0127 15:47:28.846552 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328\": container with ID starting with 0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328 not found: ID does not exist" containerID="0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.846600 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328"} err="failed to get container status \"0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328\": rpc error: code = NotFound desc = could not find container \"0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328\": container with ID starting with 0be8d78ee63c6aca2701c271d45a9d0a3c9885720b08f30c5ae7200ace133328 not found: ID does not exist" Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.855659 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"] Jan 27 15:47:28 crc kubenswrapper[4966]: I0127 15:47:28.859539 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p7z6k"] Jan 27 
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.340000 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-rmssk"]
Jan 27 15:47:29 crc kubenswrapper[4966]: E0127 15:47:29.340267 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" containerName="controller-manager"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.340302 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" containerName="controller-manager"
Jan 27 15:47:29 crc kubenswrapper[4966]: E0127 15:47:29.340316 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.340323 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 15:47:29 crc kubenswrapper[4966]: E0127 15:47:29.340336 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac92efeb-93b0-4044-9b79-fbfc19fc629e" containerName="route-controller-manager"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.340343 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac92efeb-93b0-4044-9b79-fbfc19fc629e" containerName="route-controller-manager"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.340453 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.340465 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" containerName="controller-manager"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.340481 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac92efeb-93b0-4044-9b79-fbfc19fc629e" containerName="route-controller-manager"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.340871 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.343437 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.343658 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.343787 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.343875 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.344019 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.344057 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.343781 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"]
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.344666 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.347480 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.347889 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.348507 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.348840 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.349397 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.349678 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.351605 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"]
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.356111 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.366815 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-rmssk"]
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.496973 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5mls\" (UniqueName: \"kubernetes.io/projected/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-kube-api-access-m5mls\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.497035 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-client-ca\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.497082 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-proxy-ca-bundles\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.497132 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lztf9\" (UniqueName: \"kubernetes.io/projected/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-kube-api-access-lztf9\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.497153 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-config\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.497290 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-client-ca\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.497363 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-config\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.497451 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-serving-cert\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.497503 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-serving-cert\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.598999 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-proxy-ca-bundles\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.599427 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lztf9\" (UniqueName: \"kubernetes.io/projected/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-kube-api-access-lztf9\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.599470 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-config\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.599521 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-client-ca\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.599567 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-config\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.599638 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-serving-cert\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.599680 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-serving-cert\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.599718 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5mls\" (UniqueName: \"kubernetes.io/projected/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-kube-api-access-m5mls\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.599753 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-client-ca\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.600728 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-proxy-ca-bundles\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.601355 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-client-ca\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.602184 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-config\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.602301 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-client-ca\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.602473 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-config\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.605199 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-serving-cert\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.612143 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-serving-cert\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk"
Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.616796 4966 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lztf9\" (UniqueName: \"kubernetes.io/projected/e0d60f56-c8ec-4004-a1c4-4f014dbccf7f-kube-api-access-lztf9\") pod \"route-controller-manager-86647b877f-m6l5t\" (UID: \"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f\") " pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.627651 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5mls\" (UniqueName: \"kubernetes.io/projected/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-kube-api-access-m5mls\") pod \"controller-manager-57b9555869-rmssk\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.660499 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.671609 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.860589 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-rmssk"] Jan 27 15:47:29 crc kubenswrapper[4966]: I0127 15:47:29.886640 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"] Jan 27 15:47:29 crc kubenswrapper[4966]: W0127 15:47:29.894274 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0d60f56_c8ec_4004_a1c4_4f014dbccf7f.slice/crio-c2ae821b203888804a414e03992f0295f9206d1b8fcc53e86b0b1a9a86371524 WatchSource:0}: Error finding container c2ae821b203888804a414e03992f0295f9206d1b8fcc53e86b0b1a9a86371524: Status 404 returned error can't find the container with id c2ae821b203888804a414e03992f0295f9206d1b8fcc53e86b0b1a9a86371524 Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.526881 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac92efeb-93b0-4044-9b79-fbfc19fc629e" path="/var/lib/kubelet/pods/ac92efeb-93b0-4044-9b79-fbfc19fc629e/volumes" Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.527855 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36ea7a2-dd43-4f61-a99e-523cd4bea6b6" path="/var/lib/kubelet/pods/b36ea7a2-dd43-4f61-a99e-523cd4bea6b6/volumes" Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.832055 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" event={"ID":"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c","Type":"ContainerStarted","Data":"283bff71032261b4c039b94ac10da80694945d69a3042855b63ddfc9fe76dd77"} Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.832512 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" event={"ID":"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c","Type":"ContainerStarted","Data":"8b9e928590d0041bfb481a6b0602b82f5f5865eb0fbd61e3cf282f11dcd354bc"} Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.832539 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 
15:47:30.834343 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" event={"ID":"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f","Type":"ContainerStarted","Data":"e2295b9d9c5579954267f6c3ac4518b2cfb177f96abe9f67553ea254d6c45bbd"} Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.834366 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" event={"ID":"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f","Type":"ContainerStarted","Data":"c2ae821b203888804a414e03992f0295f9206d1b8fcc53e86b0b1a9a86371524"} Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.834647 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.840149 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.841174 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.852839 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" podStartSLOduration=3.8528168689999998 podStartE2EDuration="3.852816869s" podCreationTimestamp="2026-01-27 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:47:30.851362122 +0000 UTC m=+317.154155630" watchObservedRunningTime="2026-01-27 15:47:30.852816869 +0000 UTC m=+317.155610367" Jan 27 15:47:30 crc kubenswrapper[4966]: I0127 15:47:30.874113 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podStartSLOduration=2.874089351 podStartE2EDuration="2.874089351s" podCreationTimestamp="2026-01-27 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:47:30.868998876 +0000 UTC m=+317.171792374" watchObservedRunningTime="2026-01-27 15:47:30.874089351 +0000 UTC m=+317.176882839" Jan 27 15:47:40 crc kubenswrapper[4966]: I0127 15:47:40.119561 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:47:40 crc kubenswrapper[4966]: I0127 15:47:40.120242 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:47:40 crc kubenswrapper[4966]: I0127 15:47:40.549562 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-rmssk"] Jan 27 15:47:40 crc kubenswrapper[4966]: I0127 15:47:40.549820 4966 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" podUID="97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" containerName="controller-manager" containerID="cri-o://283bff71032261b4c039b94ac10da80694945d69a3042855b63ddfc9fe76dd77" gracePeriod=30 Jan 27 15:47:40 crc kubenswrapper[4966]: I0127 15:47:40.885732 4966 generic.go:334] "Generic (PLEG): container finished" podID="97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" containerID="283bff71032261b4c039b94ac10da80694945d69a3042855b63ddfc9fe76dd77" exitCode=0 Jan 27 15:47:40 crc kubenswrapper[4966]: I0127 15:47:40.885780 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" event={"ID":"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c","Type":"ContainerDied","Data":"283bff71032261b4c039b94ac10da80694945d69a3042855b63ddfc9fe76dd77"} Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.044443 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.178892 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-config\") pod \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.179165 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-client-ca\") pod \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.179218 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-serving-cert\") pod \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.179240 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-proxy-ca-bundles\") pod \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.179263 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5mls\" (UniqueName: \"kubernetes.io/projected/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-kube-api-access-m5mls\") pod \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\" (UID: \"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c\") " Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.179733 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-client-ca" (OuterVolumeSpecName: "client-ca") pod "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" (UID: "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.179875 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" (UID: "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.180009 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-config" (OuterVolumeSpecName: "config") pod "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" (UID: "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.184168 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-kube-api-access-m5mls" (OuterVolumeSpecName: "kube-api-access-m5mls") pod "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" (UID: "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c"). InnerVolumeSpecName "kube-api-access-m5mls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.184928 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" (UID: "97aa03a2-fbbe-4c5a-80fd-3a8dba26501c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.280063 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.280095 4966 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.280106 4966 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.280113 4966 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.280126 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5mls\" (UniqueName: \"kubernetes.io/projected/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c-kube-api-access-m5mls\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.893418 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" event={"ID":"97aa03a2-fbbe-4c5a-80fd-3a8dba26501c","Type":"ContainerDied","Data":"8b9e928590d0041bfb481a6b0602b82f5f5865eb0fbd61e3cf282f11dcd354bc"} Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.893475 4966 
scope.go:117] "RemoveContainer" containerID="283bff71032261b4c039b94ac10da80694945d69a3042855b63ddfc9fe76dd77" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.893574 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9555869-rmssk" Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.940073 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-rmssk"] Jan 27 15:47:41 crc kubenswrapper[4966]: I0127 15:47:41.942783 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-rmssk"] Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.346814 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7df4589fcc-vdfk5"] Jan 27 15:47:42 crc kubenswrapper[4966]: E0127 15:47:42.347012 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" containerName="controller-manager" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.347024 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" containerName="controller-manager" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.347127 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" containerName="controller-manager" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.347458 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.349629 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.349854 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.351128 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.351655 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.352849 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.355228 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.359228 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.364243 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df4589fcc-vdfk5"] Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.394978 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-client-ca\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: 
\"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.395023 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-proxy-ca-bundles\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.395055 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46xxr\" (UniqueName: \"kubernetes.io/projected/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-kube-api-access-46xxr\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.395098 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-config\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.395190 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-serving-cert\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.496051 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-config\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.496098 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-serving-cert\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.496147 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-client-ca\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.496165 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-proxy-ca-bundles\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc 
kubenswrapper[4966]: I0127 15:47:42.496187 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46xxr\" (UniqueName: \"kubernetes.io/projected/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-kube-api-access-46xxr\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.497280 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-client-ca\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.497647 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-proxy-ca-bundles\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.497672 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-config\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.500228 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-serving-cert\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.527379 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97aa03a2-fbbe-4c5a-80fd-3a8dba26501c" path="/var/lib/kubelet/pods/97aa03a2-fbbe-4c5a-80fd-3a8dba26501c/volumes" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.534471 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46xxr\" (UniqueName: \"kubernetes.io/projected/03e547fd-14a6-41eb-9bf7-8aea75e60ddf-kube-api-access-46xxr\") pod \"controller-manager-7df4589fcc-vdfk5\" (UID: \"03e547fd-14a6-41eb-9bf7-8aea75e60ddf\") " pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:42 crc kubenswrapper[4966]: I0127 15:47:42.684875 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:43 crc kubenswrapper[4966]: I0127 15:47:43.089356 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df4589fcc-vdfk5"] Jan 27 15:47:43 crc kubenswrapper[4966]: I0127 15:47:43.905488 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" event={"ID":"03e547fd-14a6-41eb-9bf7-8aea75e60ddf","Type":"ContainerStarted","Data":"e0308cf11a67b3756b015108ae56eb2f5293f8f037d81cc480162fc732578ef6"} Jan 27 15:47:43 crc kubenswrapper[4966]: I0127 15:47:43.905827 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" event={"ID":"03e547fd-14a6-41eb-9bf7-8aea75e60ddf","Type":"ContainerStarted","Data":"b3ed0fdb5b64b6a1d7b4b0b995f930851bea14ff3325981012996ba14dbd117e"} Jan 27 15:47:43 crc kubenswrapper[4966]: I0127 15:47:43.905869 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:43 crc kubenswrapper[4966]: I0127 15:47:43.911804 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 15:47:43 crc kubenswrapper[4966]: I0127 15:47:43.939338 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podStartSLOduration=3.939316859 podStartE2EDuration="3.939316859s" podCreationTimestamp="2026-01-27 15:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:47:43.937315155 +0000 UTC m=+330.240108663" watchObservedRunningTime="2026-01-27 15:47:43.939316859 +0000 UTC m=+330.242110347" Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.796755 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnhcl"] Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.797860 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dnhcl" podUID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerName="registry-server" containerID="cri-o://74c361ba077011ebf2e5fe3ff200db6938dabefc1bfb5559a1430bf50c8723d6" gracePeriod=30 Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.805095 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtmd5"] Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.805830 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtmd5" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerName="registry-server" containerID="cri-o://14f0e287f7dfcf5ac456350e45b80e862eadab33dd60fabcbe59a370af5130a3" gracePeriod=30 Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.822905 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tjhrf"] Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.823264 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerName="marketplace-operator" 
containerID="cri-o://dbeb8c5d1a5967fc8ce57f40c7b77c409baa0f5234f843f0804cd2c35d43c8f6" gracePeriod=30 Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.838455 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrghj"] Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.838756 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wrghj" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerName="registry-server" containerID="cri-o://8933519b7f897a90af067c5a6d8dfcf872ff9e7902a57991d1977c60c2b0f614" gracePeriod=30 Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.854396 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrddt"] Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.854643 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jrddt" podUID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerName="registry-server" containerID="cri-o://dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f" gracePeriod=30 Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.862109 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7cfns"] Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.863990 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.865231 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7cfns"] Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.965858 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/692eec10-7d08-44ba-aa26-0ac0eacfb1e7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7cfns\" (UID: \"692eec10-7d08-44ba-aa26-0ac0eacfb1e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.966024 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvjb\" (UniqueName: \"kubernetes.io/projected/692eec10-7d08-44ba-aa26-0ac0eacfb1e7-kube-api-access-hcvjb\") pod \"marketplace-operator-79b997595-7cfns\" (UID: \"692eec10-7d08-44ba-aa26-0ac0eacfb1e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.966073 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/692eec10-7d08-44ba-aa26-0ac0eacfb1e7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7cfns\" (UID: \"692eec10-7d08-44ba-aa26-0ac0eacfb1e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.977415 4966 generic.go:334] "Generic (PLEG): container finished" podID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerID="8933519b7f897a90af067c5a6d8dfcf872ff9e7902a57991d1977c60c2b0f614" exitCode=0 Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.977555 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrghj" 
event={"ID":"bbad73d0-4e65-4601-9d3e-7ac464269b5f","Type":"ContainerDied","Data":"8933519b7f897a90af067c5a6d8dfcf872ff9e7902a57991d1977c60c2b0f614"} Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.979548 4966 generic.go:334] "Generic (PLEG): container finished" podID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerID="74c361ba077011ebf2e5fe3ff200db6938dabefc1bfb5559a1430bf50c8723d6" exitCode=0 Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.979598 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnhcl" event={"ID":"c3ad1e5f-77aa-4005-bd12-618819d83c12","Type":"ContainerDied","Data":"74c361ba077011ebf2e5fe3ff200db6938dabefc1bfb5559a1430bf50c8723d6"} Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.981070 4966 generic.go:334] "Generic (PLEG): container finished" podID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerID="14f0e287f7dfcf5ac456350e45b80e862eadab33dd60fabcbe59a370af5130a3" exitCode=0 Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.981115 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmd5" event={"ID":"6e7220b0-9ce3-461c-a434-e09d4fde1b0a","Type":"ContainerDied","Data":"14f0e287f7dfcf5ac456350e45b80e862eadab33dd60fabcbe59a370af5130a3"} Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.982481 4966 generic.go:334] "Generic (PLEG): container finished" podID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerID="dbeb8c5d1a5967fc8ce57f40c7b77c409baa0f5234f843f0804cd2c35d43c8f6" exitCode=0 Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.982526 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" event={"ID":"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6","Type":"ContainerDied","Data":"dbeb8c5d1a5967fc8ce57f40c7b77c409baa0f5234f843f0804cd2c35d43c8f6"} Jan 27 15:47:55 crc kubenswrapper[4966]: I0127 15:47:55.982567 4966 scope.go:117] "RemoveContainer" containerID="22d7c4c40beca27781ab90f03b512e8744efdd00e64cc9eaf1f74f6e722837b0" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.066680 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcvjb\" (UniqueName: \"kubernetes.io/projected/692eec10-7d08-44ba-aa26-0ac0eacfb1e7-kube-api-access-hcvjb\") pod \"marketplace-operator-79b997595-7cfns\" (UID: \"692eec10-7d08-44ba-aa26-0ac0eacfb1e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.066723 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/692eec10-7d08-44ba-aa26-0ac0eacfb1e7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7cfns\" (UID: \"692eec10-7d08-44ba-aa26-0ac0eacfb1e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.066774 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/692eec10-7d08-44ba-aa26-0ac0eacfb1e7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7cfns\" (UID: \"692eec10-7d08-44ba-aa26-0ac0eacfb1e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.069437 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/692eec10-7d08-44ba-aa26-0ac0eacfb1e7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7cfns\" (UID: \"692eec10-7d08-44ba-aa26-0ac0eacfb1e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.072972 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/692eec10-7d08-44ba-aa26-0ac0eacfb1e7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7cfns\" (UID: \"692eec10-7d08-44ba-aa26-0ac0eacfb1e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.084254 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcvjb\" (UniqueName: \"kubernetes.io/projected/692eec10-7d08-44ba-aa26-0ac0eacfb1e7-kube-api-access-hcvjb\") pod \"marketplace-operator-79b997595-7cfns\" (UID: \"692eec10-7d08-44ba-aa26-0ac0eacfb1e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.247602 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.302514 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.476301 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-catalog-content\") pod \"c3ad1e5f-77aa-4005-bd12-618819d83c12\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.490416 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-utilities\") pod \"c3ad1e5f-77aa-4005-bd12-618819d83c12\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.490470 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77ppn\" (UniqueName: \"kubernetes.io/projected/c3ad1e5f-77aa-4005-bd12-618819d83c12-kube-api-access-77ppn\") pod \"c3ad1e5f-77aa-4005-bd12-618819d83c12\" (UID: \"c3ad1e5f-77aa-4005-bd12-618819d83c12\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.491713 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-utilities" (OuterVolumeSpecName: "utilities") pod "c3ad1e5f-77aa-4005-bd12-618819d83c12" (UID: "c3ad1e5f-77aa-4005-bd12-618819d83c12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.495858 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ad1e5f-77aa-4005-bd12-618819d83c12-kube-api-access-77ppn" (OuterVolumeSpecName: "kube-api-access-77ppn") pod "c3ad1e5f-77aa-4005-bd12-618819d83c12" (UID: "c3ad1e5f-77aa-4005-bd12-618819d83c12"). InnerVolumeSpecName "kube-api-access-77ppn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.529064 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3ad1e5f-77aa-4005-bd12-618819d83c12" (UID: "c3ad1e5f-77aa-4005-bd12-618819d83c12"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.565544 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.574451 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.591289 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-operator-metrics\") pod \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.591338 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-utilities\") pod \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.591362 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vshm4\" (UniqueName: \"kubernetes.io/projected/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-kube-api-access-vshm4\") pod \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.591381 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-trusted-ca\") pod \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\" (UID: \"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.591481 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-catalog-content\") pod \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.591551 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xftq\" (UniqueName: \"kubernetes.io/projected/9e0696ef-3017-4937-92ee-fe9e794c9fdd-kube-api-access-8xftq\") pod \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\" (UID: \"9e0696ef-3017-4937-92ee-fe9e794c9fdd\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.591809 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.591825 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c3ad1e5f-77aa-4005-bd12-618819d83c12-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.591835 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77ppn\" (UniqueName: \"kubernetes.io/projected/c3ad1e5f-77aa-4005-bd12-618819d83c12-kube-api-access-77ppn\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.597172 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-utilities" (OuterVolumeSpecName: "utilities") pod "9e0696ef-3017-4937-92ee-fe9e794c9fdd" (UID: "9e0696ef-3017-4937-92ee-fe9e794c9fdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.607472 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" (UID: "1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.614522 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-kube-api-access-vshm4" (OuterVolumeSpecName: "kube-api-access-vshm4") pod "1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" (UID: "1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6"). InnerVolumeSpecName "kube-api-access-vshm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.619722 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.622189 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" (UID: "1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.622561 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.624346 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0696ef-3017-4937-92ee-fe9e794c9fdd-kube-api-access-8xftq" (OuterVolumeSpecName: "kube-api-access-8xftq") pod "9e0696ef-3017-4937-92ee-fe9e794c9fdd" (UID: "9e0696ef-3017-4937-92ee-fe9e794c9fdd"). InnerVolumeSpecName "kube-api-access-8xftq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.693017 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w84bx\" (UniqueName: \"kubernetes.io/projected/bbad73d0-4e65-4601-9d3e-7ac464269b5f-kube-api-access-w84bx\") pod \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.693095 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-utilities\") pod \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.693151 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-catalog-content\") pod \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\" (UID: \"bbad73d0-4e65-4601-9d3e-7ac464269b5f\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.693176 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-utilities\") pod \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.693787 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-utilities" (OuterVolumeSpecName: "utilities") pod "bbad73d0-4e65-4601-9d3e-7ac464269b5f" (UID: "bbad73d0-4e65-4601-9d3e-7ac464269b5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.695721 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbad73d0-4e65-4601-9d3e-7ac464269b5f-kube-api-access-w84bx" (OuterVolumeSpecName: "kube-api-access-w84bx") pod "bbad73d0-4e65-4601-9d3e-7ac464269b5f" (UID: "bbad73d0-4e65-4601-9d3e-7ac464269b5f"). InnerVolumeSpecName "kube-api-access-w84bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.696627 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-utilities" (OuterVolumeSpecName: "utilities") pod "6e7220b0-9ce3-461c-a434-e09d4fde1b0a" (UID: "6e7220b0-9ce3-461c-a434-e09d4fde1b0a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701058 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzws2\" (UniqueName: \"kubernetes.io/projected/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-kube-api-access-kzws2\") pod \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701128 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-catalog-content\") pod \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\" (UID: \"6e7220b0-9ce3-461c-a434-e09d4fde1b0a\") " Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701586 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w84bx\" (UniqueName: \"kubernetes.io/projected/bbad73d0-4e65-4601-9d3e-7ac464269b5f-kube-api-access-w84bx\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701613 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xftq\" (UniqueName: \"kubernetes.io/projected/9e0696ef-3017-4937-92ee-fe9e794c9fdd-kube-api-access-8xftq\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701626 4966 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701638 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701651 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vshm4\" (UniqueName: \"kubernetes.io/projected/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-kube-api-access-vshm4\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701662 4966 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701673 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.701683 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.704583 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-kube-api-access-kzws2" (OuterVolumeSpecName: "kube-api-access-kzws2") pod "6e7220b0-9ce3-461c-a434-e09d4fde1b0a" (UID: "6e7220b0-9ce3-461c-a434-e09d4fde1b0a"). InnerVolumeSpecName "kube-api-access-kzws2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.717971 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbad73d0-4e65-4601-9d3e-7ac464269b5f" (UID: "bbad73d0-4e65-4601-9d3e-7ac464269b5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.748299 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e7220b0-9ce3-461c-a434-e09d4fde1b0a" (UID: "6e7220b0-9ce3-461c-a434-e09d4fde1b0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.770244 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e0696ef-3017-4937-92ee-fe9e794c9fdd" (UID: "9e0696ef-3017-4937-92ee-fe9e794c9fdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.797419 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7cfns"] Jan 27 15:47:56 crc kubenswrapper[4966]: W0127 15:47:56.802431 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod692eec10_7d08_44ba_aa26_0ac0eacfb1e7.slice/crio-bb3a0e885e7c911c5f05b4f8b67890aaacc77a7aa0b46d6bb9fcfab2499f9184 WatchSource:0}: Error finding container bb3a0e885e7c911c5f05b4f8b67890aaacc77a7aa0b46d6bb9fcfab2499f9184: Status 404 returned error can't find the container with id bb3a0e885e7c911c5f05b4f8b67890aaacc77a7aa0b46d6bb9fcfab2499f9184 Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.802683 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbad73d0-4e65-4601-9d3e-7ac464269b5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.802702 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0696ef-3017-4937-92ee-fe9e794c9fdd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.802711 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzws2\" (UniqueName: \"kubernetes.io/projected/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-kube-api-access-kzws2\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.802721 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7220b0-9ce3-461c-a434-e09d4fde1b0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.989133 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" event={"ID":"692eec10-7d08-44ba-aa26-0ac0eacfb1e7","Type":"ContainerStarted","Data":"46428c67caf65127a288a300ffd73ce39c75f4ab674da139d54d36675cf4c0c5"} Jan 27 
15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.989371 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" event={"ID":"692eec10-7d08-44ba-aa26-0ac0eacfb1e7","Type":"ContainerStarted","Data":"bb3a0e885e7c911c5f05b4f8b67890aaacc77a7aa0b46d6bb9fcfab2499f9184"} Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.989747 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.990593 4966 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7cfns container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.990662 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" podUID="692eec10-7d08-44ba-aa26-0ac0eacfb1e7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.992561 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmd5" event={"ID":"6e7220b0-9ce3-461c-a434-e09d4fde1b0a","Type":"ContainerDied","Data":"2308d2af77c77b1e6552f761eb9a67621e05d54d6c46c6a9cd5bda4ca58b7e57"} Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.992601 4966 scope.go:117] "RemoveContainer" containerID="14f0e287f7dfcf5ac456350e45b80e862eadab33dd60fabcbe59a370af5130a3" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.992680 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtmd5" Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.999038 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" event={"ID":"1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6","Type":"ContainerDied","Data":"bf01320752cdb04afb2fb7687f6c90f8861da12ad2ceb251eb35b86de8f6734a"} Jan 27 15:47:56 crc kubenswrapper[4966]: I0127 15:47:56.999131 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tjhrf" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.009861 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrghj" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.009851 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrghj" event={"ID":"bbad73d0-4e65-4601-9d3e-7ac464269b5f","Type":"ContainerDied","Data":"6d6c291c45e9ec7ed0f7e916c770b419d45df29ae8e2e3f4d842e555412ad38e"} Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.022450 4966 scope.go:117] "RemoveContainer" containerID="c195c696c43b3faaa2c0779e59f668ed2c95b9e7647666be08449af7724ab5e2" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.023537 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnhcl" event={"ID":"c3ad1e5f-77aa-4005-bd12-618819d83c12","Type":"ContainerDied","Data":"528d574363302529d696bd0b276d6d7cef2d9e91d16fdae828957b7d05046ed0"} Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.024247 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnhcl" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.029838 4966 generic.go:334] "Generic (PLEG): container finished" podID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerID="dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f" exitCode=0 Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.029943 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrddt" event={"ID":"9e0696ef-3017-4937-92ee-fe9e794c9fdd","Type":"ContainerDied","Data":"dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f"} Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.029995 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrddt" event={"ID":"9e0696ef-3017-4937-92ee-fe9e794c9fdd","Type":"ContainerDied","Data":"138cf139f1f4d72ea0f28b267091e4ce7d4ae249fdb1319d425dfdf3bfc93271"} Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.030827 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" podStartSLOduration=2.030806766 podStartE2EDuration="2.030806766s" podCreationTimestamp="2026-01-27 15:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:47:57.019553761 +0000 UTC m=+343.322347269" watchObservedRunningTime="2026-01-27 15:47:57.030806766 +0000 UTC m=+343.333600264" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.031002 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrddt" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.043977 4966 scope.go:117] "RemoveContainer" containerID="2e687a76f890ee6fdad32b0e81382a029765024915768a1d4a62c5194e60bda3" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.065971 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtmd5"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.066508 4966 scope.go:117] "RemoveContainer" containerID="dbeb8c5d1a5967fc8ce57f40c7b77c409baa0f5234f843f0804cd2c35d43c8f6" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.074623 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtmd5"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.079408 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tjhrf"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.081279 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tjhrf"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.092468 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnhcl"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.096121 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dnhcl"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.104119 4966 scope.go:117] "RemoveContainer" containerID="8933519b7f897a90af067c5a6d8dfcf872ff9e7902a57991d1977c60c2b0f614" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.114130 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrghj"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.118813 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrghj"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.125969 4966 scope.go:117] "RemoveContainer" containerID="e6b8690c01d87a3fbbdb6b803e8a1f246e1d49642257fb9eec57d2ea26e00b66" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.130783 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrddt"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.136712 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jrddt"] Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.140148 4966 scope.go:117] "RemoveContainer" containerID="5586c2f2f0faa7f0931a1dfd67f401d1bfcc5c908be52ce91621c83634a6b93f" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.154802 4966 scope.go:117] "RemoveContainer" containerID="74c361ba077011ebf2e5fe3ff200db6938dabefc1bfb5559a1430bf50c8723d6" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.173431 4966 scope.go:117] "RemoveContainer" containerID="e05e3618a09bcafe68421c66e24c5b8a2ae69e2e4566dcc64ad622a4a1e6577f" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.192070 4966 scope.go:117] "RemoveContainer" containerID="9199338a4760f5c9899818c13748e6e8e09f8a4c793b2f5bb86795c847902705" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.215732 4966 scope.go:117] "RemoveContainer" containerID="dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.228107 4966 scope.go:117] "RemoveContainer" 
containerID="cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.243710 4966 scope.go:117] "RemoveContainer" containerID="d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.256549 4966 scope.go:117] "RemoveContainer" containerID="dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f" Jan 27 15:47:57 crc kubenswrapper[4966]: E0127 15:47:57.257555 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f\": container with ID starting with dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f not found: ID does not exist" containerID="dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.257584 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f"} err="failed to get container status \"dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f\": rpc error: code = NotFound desc = could not find container \"dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f\": container with ID starting with dcad32c69724e125a80f3b57df58adbe69dc27eacaa8930b2ae72cbd622c4d2f not found: ID does not exist" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.257607 4966 scope.go:117] "RemoveContainer" containerID="cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234" Jan 27 15:47:57 crc kubenswrapper[4966]: E0127 15:47:57.258744 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234\": container with ID starting with cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234 not found: ID does not exist" containerID="cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.258770 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234"} err="failed to get container status \"cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234\": rpc error: code = NotFound desc = could not find container \"cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234\": container with ID starting with cd1b2c2c9b9ab3281f1f798d6677e835a620655818603c5c149b20106bf8a234 not found: ID does not exist" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.258785 4966 scope.go:117] "RemoveContainer" containerID="d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4" Jan 27 15:47:57 crc kubenswrapper[4966]: E0127 15:47:57.259035 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4\": container with ID starting with d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4 not found: ID does not exist" containerID="d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4" Jan 27 15:47:57 crc kubenswrapper[4966]: I0127 15:47:57.259056 4966 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4"} err="failed to get container status \"d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4\": rpc error: code = NotFound desc = could not find container \"d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4\": container with ID starting with d3c2a0261b6de22871ac056b2c1650ebe4f6cd32e007410144f869f204340ff4 not found: ID does not exist" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.045604 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.345933 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6zhtk"] Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346123 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346133 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346142 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerName="extract-utilities" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346148 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerName="extract-utilities" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346159 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346167 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346179 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346187 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346197 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerName="extract-content" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346203 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerName="extract-content" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346210 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerName="extract-utilities" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346216 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerName="extract-utilities" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346224 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerName="extract-content" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346229 4966 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerName="extract-content" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346238 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerName="extract-utilities" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346243 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerName="extract-utilities" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346250 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346257 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346264 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerName="extract-content" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346271 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerName="extract-content" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346281 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerName="extract-content" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346287 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerName="extract-content" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346294 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerName="marketplace-operator" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346299 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerName="marketplace-operator" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346305 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerName="extract-utilities" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346310 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerName="extract-utilities" Jan 27 15:47:58 crc kubenswrapper[4966]: E0127 15:47:58.346318 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerName="marketplace-operator" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346324 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerName="marketplace-operator" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346408 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ad1e5f-77aa-4005-bd12-618819d83c12" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346419 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346427 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 
15:47:58.346436 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerName="marketplace-operator" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346442 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" containerName="registry-server" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.346453 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" containerName="marketplace-operator" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.347155 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.353338 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.362852 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zhtk"] Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.425262 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgv94\" (UniqueName: \"kubernetes.io/projected/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-kube-api-access-cgv94\") pod \"community-operators-6zhtk\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.425327 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-catalog-content\") pod \"community-operators-6zhtk\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.425368 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-utilities\") pod \"community-operators-6zhtk\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.526518 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgv94\" (UniqueName: \"kubernetes.io/projected/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-kube-api-access-cgv94\") pod \"community-operators-6zhtk\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.526577 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-catalog-content\") pod \"community-operators-6zhtk\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.526613 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-utilities\") pod \"community-operators-6zhtk\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " 
pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.527016 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-catalog-content\") pod \"community-operators-6zhtk\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.527099 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-utilities\") pod \"community-operators-6zhtk\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.527142 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6" path="/var/lib/kubelet/pods/1f9786a8-d22b-41bd-bcd0-1e3d8c9136c6/volumes" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.527594 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7220b0-9ce3-461c-a434-e09d4fde1b0a" path="/var/lib/kubelet/pods/6e7220b0-9ce3-461c-a434-e09d4fde1b0a/volumes" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.528203 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0696ef-3017-4937-92ee-fe9e794c9fdd" path="/var/lib/kubelet/pods/9e0696ef-3017-4937-92ee-fe9e794c9fdd/volumes" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.528765 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbad73d0-4e65-4601-9d3e-7ac464269b5f" path="/var/lib/kubelet/pods/bbad73d0-4e65-4601-9d3e-7ac464269b5f/volumes" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.529809 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ad1e5f-77aa-4005-bd12-618819d83c12" path="/var/lib/kubelet/pods/c3ad1e5f-77aa-4005-bd12-618819d83c12/volumes" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.547962 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgv94\" (UniqueName: \"kubernetes.io/projected/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-kube-api-access-cgv94\") pod \"community-operators-6zhtk\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.663332 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.943225 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q8wrz"] Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.948917 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.962222 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 15:47:58 crc kubenswrapper[4966]: I0127 15:47:58.972692 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8wrz"] Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.034474 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818e66f7-b294-448b-9d55-99de7ebd3f34-catalog-content\") pod \"redhat-operators-q8wrz\" (UID: \"818e66f7-b294-448b-9d55-99de7ebd3f34\") " pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.034546 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7bw\" (UniqueName: \"kubernetes.io/projected/818e66f7-b294-448b-9d55-99de7ebd3f34-kube-api-access-jk7bw\") pod \"redhat-operators-q8wrz\" (UID: \"818e66f7-b294-448b-9d55-99de7ebd3f34\") " pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.034565 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818e66f7-b294-448b-9d55-99de7ebd3f34-utilities\") pod \"redhat-operators-q8wrz\" (UID: \"818e66f7-b294-448b-9d55-99de7ebd3f34\") " pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.094082 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zhtk"] Jan 27 15:47:59 crc kubenswrapper[4966]: W0127 15:47:59.103268 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8b8a2b7_39e4_4fc2_afd3_7f07fb8bc680.slice/crio-3eb70555e7ace9fe82fa043398ec19659b1710d56b0ecba16d395b6eacdfa509 WatchSource:0}: Error finding container 3eb70555e7ace9fe82fa043398ec19659b1710d56b0ecba16d395b6eacdfa509: Status 404 returned error can't find the container with id 3eb70555e7ace9fe82fa043398ec19659b1710d56b0ecba16d395b6eacdfa509 Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.135958 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818e66f7-b294-448b-9d55-99de7ebd3f34-catalog-content\") pod \"redhat-operators-q8wrz\" (UID: \"818e66f7-b294-448b-9d55-99de7ebd3f34\") " pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.136012 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk7bw\" (UniqueName: \"kubernetes.io/projected/818e66f7-b294-448b-9d55-99de7ebd3f34-kube-api-access-jk7bw\") pod \"redhat-operators-q8wrz\" (UID: \"818e66f7-b294-448b-9d55-99de7ebd3f34\") " pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.136035 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818e66f7-b294-448b-9d55-99de7ebd3f34-utilities\") pod \"redhat-operators-q8wrz\" (UID: \"818e66f7-b294-448b-9d55-99de7ebd3f34\") " pod="openshift-marketplace/redhat-operators-q8wrz" Jan 
27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.136720 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818e66f7-b294-448b-9d55-99de7ebd3f34-catalog-content\") pod \"redhat-operators-q8wrz\" (UID: \"818e66f7-b294-448b-9d55-99de7ebd3f34\") " pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.136961 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818e66f7-b294-448b-9d55-99de7ebd3f34-utilities\") pod \"redhat-operators-q8wrz\" (UID: \"818e66f7-b294-448b-9d55-99de7ebd3f34\") " pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.156273 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk7bw\" (UniqueName: \"kubernetes.io/projected/818e66f7-b294-448b-9d55-99de7ebd3f34-kube-api-access-jk7bw\") pod \"redhat-operators-q8wrz\" (UID: \"818e66f7-b294-448b-9d55-99de7ebd3f34\") " pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.275696 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:47:59 crc kubenswrapper[4966]: I0127 15:47:59.668453 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8wrz"] Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.052409 4966 generic.go:334] "Generic (PLEG): container finished" podID="818e66f7-b294-448b-9d55-99de7ebd3f34" containerID="5e58fd459178bf3f15c0bab8027275e5a8a3300137fc0f05e97734e47d8b56d7" exitCode=0 Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.052520 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8wrz" event={"ID":"818e66f7-b294-448b-9d55-99de7ebd3f34","Type":"ContainerDied","Data":"5e58fd459178bf3f15c0bab8027275e5a8a3300137fc0f05e97734e47d8b56d7"} Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.053188 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8wrz" event={"ID":"818e66f7-b294-448b-9d55-99de7ebd3f34","Type":"ContainerStarted","Data":"78c9ec53ec445fff7c72b39db468da310e3d39b5b3dfb5a8e766ff2b2c4a712a"} Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.054544 4966 generic.go:334] "Generic (PLEG): container finished" podID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerID="770a5f8200e923718c40271c35681a073e52ac27df4549fb0b825500d9fde32b" exitCode=0 Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.055649 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zhtk" event={"ID":"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680","Type":"ContainerDied","Data":"770a5f8200e923718c40271c35681a073e52ac27df4549fb0b825500d9fde32b"} Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.055694 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zhtk" event={"ID":"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680","Type":"ContainerStarted","Data":"3eb70555e7ace9fe82fa043398ec19659b1710d56b0ecba16d395b6eacdfa509"} Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.948195 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dvjsg"] Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.949564 4966 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.954483 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 15:48:00 crc kubenswrapper[4966]: I0127 15:48:00.961502 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvjsg"] Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.058536 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvh28\" (UniqueName: \"kubernetes.io/projected/8ebef5e4-f520-44af-9488-659932ab7ff8-kube-api-access-dvh28\") pod \"redhat-marketplace-dvjsg\" (UID: \"8ebef5e4-f520-44af-9488-659932ab7ff8\") " pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.058591 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ebef5e4-f520-44af-9488-659932ab7ff8-catalog-content\") pod \"redhat-marketplace-dvjsg\" (UID: \"8ebef5e4-f520-44af-9488-659932ab7ff8\") " pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.058613 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ebef5e4-f520-44af-9488-659932ab7ff8-utilities\") pod \"redhat-marketplace-dvjsg\" (UID: \"8ebef5e4-f520-44af-9488-659932ab7ff8\") " pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.159167 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ebef5e4-f520-44af-9488-659932ab7ff8-catalog-content\") pod \"redhat-marketplace-dvjsg\" (UID: \"8ebef5e4-f520-44af-9488-659932ab7ff8\") " pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.159323 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ebef5e4-f520-44af-9488-659932ab7ff8-utilities\") pod \"redhat-marketplace-dvjsg\" (UID: \"8ebef5e4-f520-44af-9488-659932ab7ff8\") " pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.159454 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvh28\" (UniqueName: \"kubernetes.io/projected/8ebef5e4-f520-44af-9488-659932ab7ff8-kube-api-access-dvh28\") pod \"redhat-marketplace-dvjsg\" (UID: \"8ebef5e4-f520-44af-9488-659932ab7ff8\") " pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.160030 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ebef5e4-f520-44af-9488-659932ab7ff8-catalog-content\") pod \"redhat-marketplace-dvjsg\" (UID: \"8ebef5e4-f520-44af-9488-659932ab7ff8\") " pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.160528 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ebef5e4-f520-44af-9488-659932ab7ff8-utilities\") pod 
\"redhat-marketplace-dvjsg\" (UID: \"8ebef5e4-f520-44af-9488-659932ab7ff8\") " pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.181332 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvh28\" (UniqueName: \"kubernetes.io/projected/8ebef5e4-f520-44af-9488-659932ab7ff8-kube-api-access-dvh28\") pod \"redhat-marketplace-dvjsg\" (UID: \"8ebef5e4-f520-44af-9488-659932ab7ff8\") " pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.314713 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.544384 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7br4n"] Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.545708 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.547613 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.555602 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7br4n"] Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.665458 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcb091b-8f56-46c3-8437-2505b27684da-utilities\") pod \"certified-operators-7br4n\" (UID: \"0bcb091b-8f56-46c3-8437-2505b27684da\") " pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.665625 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcb091b-8f56-46c3-8437-2505b27684da-catalog-content\") pod \"certified-operators-7br4n\" (UID: \"0bcb091b-8f56-46c3-8437-2505b27684da\") " pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.665675 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8lt\" (UniqueName: \"kubernetes.io/projected/0bcb091b-8f56-46c3-8437-2505b27684da-kube-api-access-wl8lt\") pod \"certified-operators-7br4n\" (UID: \"0bcb091b-8f56-46c3-8437-2505b27684da\") " pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.720937 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvjsg"] Jan 27 15:48:01 crc kubenswrapper[4966]: W0127 15:48:01.729400 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ebef5e4_f520_44af_9488_659932ab7ff8.slice/crio-9cf1bda5174a053cfa4e448bfecdbb41a8eff890aec2c308d43f775796c6e0de WatchSource:0}: Error finding container 9cf1bda5174a053cfa4e448bfecdbb41a8eff890aec2c308d43f775796c6e0de: Status 404 returned error can't find the container with id 9cf1bda5174a053cfa4e448bfecdbb41a8eff890aec2c308d43f775796c6e0de Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.766312 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcb091b-8f56-46c3-8437-2505b27684da-catalog-content\") pod \"certified-operators-7br4n\" (UID: \"0bcb091b-8f56-46c3-8437-2505b27684da\") " pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.766356 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl8lt\" (UniqueName: \"kubernetes.io/projected/0bcb091b-8f56-46c3-8437-2505b27684da-kube-api-access-wl8lt\") pod \"certified-operators-7br4n\" (UID: \"0bcb091b-8f56-46c3-8437-2505b27684da\") " pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.766400 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcb091b-8f56-46c3-8437-2505b27684da-utilities\") pod \"certified-operators-7br4n\" (UID: \"0bcb091b-8f56-46c3-8437-2505b27684da\") " pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.766808 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcb091b-8f56-46c3-8437-2505b27684da-utilities\") pod \"certified-operators-7br4n\" (UID: \"0bcb091b-8f56-46c3-8437-2505b27684da\") " pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.766951 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcb091b-8f56-46c3-8437-2505b27684da-catalog-content\") pod \"certified-operators-7br4n\" (UID: \"0bcb091b-8f56-46c3-8437-2505b27684da\") " pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.786728 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl8lt\" (UniqueName: \"kubernetes.io/projected/0bcb091b-8f56-46c3-8437-2505b27684da-kube-api-access-wl8lt\") pod \"certified-operators-7br4n\" (UID: \"0bcb091b-8f56-46c3-8437-2505b27684da\") " pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:01 crc kubenswrapper[4966]: I0127 15:48:01.869915 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:02 crc kubenswrapper[4966]: I0127 15:48:02.067110 4966 generic.go:334] "Generic (PLEG): container finished" podID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerID="b54e0e295b2862e6e03d97caba4759065123e6e8f4c4eca16a4bcb39ba9bcbdb" exitCode=0 Jan 27 15:48:02 crc kubenswrapper[4966]: I0127 15:48:02.067178 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zhtk" event={"ID":"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680","Type":"ContainerDied","Data":"b54e0e295b2862e6e03d97caba4759065123e6e8f4c4eca16a4bcb39ba9bcbdb"} Jan 27 15:48:02 crc kubenswrapper[4966]: I0127 15:48:02.068514 4966 generic.go:334] "Generic (PLEG): container finished" podID="8ebef5e4-f520-44af-9488-659932ab7ff8" containerID="b273871c4a9f22949532b5e7684341e2c30338181b4f4ff1688224122d55945f" exitCode=0 Jan 27 15:48:02 crc kubenswrapper[4966]: I0127 15:48:02.068560 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvjsg" event={"ID":"8ebef5e4-f520-44af-9488-659932ab7ff8","Type":"ContainerDied","Data":"b273871c4a9f22949532b5e7684341e2c30338181b4f4ff1688224122d55945f"} Jan 27 15:48:02 crc kubenswrapper[4966]: I0127 15:48:02.068581 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvjsg" event={"ID":"8ebef5e4-f520-44af-9488-659932ab7ff8","Type":"ContainerStarted","Data":"9cf1bda5174a053cfa4e448bfecdbb41a8eff890aec2c308d43f775796c6e0de"} Jan 27 15:48:02 crc kubenswrapper[4966]: I0127 15:48:02.072665 4966 generic.go:334] "Generic (PLEG): container finished" podID="818e66f7-b294-448b-9d55-99de7ebd3f34" containerID="e8b531e53dbce184f216b265528f5a139622f3703afb859de7f409347c98763f" exitCode=0 Jan 27 15:48:02 crc kubenswrapper[4966]: I0127 15:48:02.072711 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8wrz" event={"ID":"818e66f7-b294-448b-9d55-99de7ebd3f34","Type":"ContainerDied","Data":"e8b531e53dbce184f216b265528f5a139622f3703afb859de7f409347c98763f"} Jan 27 15:48:02 crc kubenswrapper[4966]: I0127 15:48:02.246978 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7br4n"] Jan 27 15:48:02 crc kubenswrapper[4966]: W0127 15:48:02.253623 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bcb091b_8f56_46c3_8437_2505b27684da.slice/crio-4a4b6ed956bd0455d5a08494ecdea3968c5527ab8685f2477ebf8073924cf0e5 WatchSource:0}: Error finding container 4a4b6ed956bd0455d5a08494ecdea3968c5527ab8685f2477ebf8073924cf0e5: Status 404 returned error can't find the container with id 4a4b6ed956bd0455d5a08494ecdea3968c5527ab8685f2477ebf8073924cf0e5 Jan 27 15:48:03 crc kubenswrapper[4966]: I0127 15:48:03.085098 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvjsg" event={"ID":"8ebef5e4-f520-44af-9488-659932ab7ff8","Type":"ContainerStarted","Data":"7595388f340a57c46e05b9630a7708ccaf7dbc8796dec19fd5ffe084ee08d717"} Jan 27 15:48:03 crc kubenswrapper[4966]: I0127 15:48:03.088202 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zhtk" event={"ID":"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680","Type":"ContainerStarted","Data":"03e9341e5e55a8cc92fc93883fd156c84e643556f0f0801d69970f40ebb61cbe"} Jan 27 15:48:03 crc kubenswrapper[4966]: I0127 15:48:03.092161 
4966 generic.go:334] "Generic (PLEG): container finished" podID="0bcb091b-8f56-46c3-8437-2505b27684da" containerID="3fe8343f0a04530817f61cc6055a3fc00404e0be0c14a45d03f6979a202d0287" exitCode=0 Jan 27 15:48:03 crc kubenswrapper[4966]: I0127 15:48:03.092247 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7br4n" event={"ID":"0bcb091b-8f56-46c3-8437-2505b27684da","Type":"ContainerDied","Data":"3fe8343f0a04530817f61cc6055a3fc00404e0be0c14a45d03f6979a202d0287"} Jan 27 15:48:03 crc kubenswrapper[4966]: I0127 15:48:03.092277 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7br4n" event={"ID":"0bcb091b-8f56-46c3-8437-2505b27684da","Type":"ContainerStarted","Data":"4a4b6ed956bd0455d5a08494ecdea3968c5527ab8685f2477ebf8073924cf0e5"} Jan 27 15:48:03 crc kubenswrapper[4966]: I0127 15:48:03.096059 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8wrz" event={"ID":"818e66f7-b294-448b-9d55-99de7ebd3f34","Type":"ContainerStarted","Data":"f3e596043655fad9da672c7e18fb97862234930509d7e9186201bedd788c4129"} Jan 27 15:48:03 crc kubenswrapper[4966]: I0127 15:48:03.136799 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q8wrz" podStartSLOduration=2.724017607 podStartE2EDuration="5.136779073s" podCreationTimestamp="2026-01-27 15:47:58 +0000 UTC" firstStartedPulling="2026-01-27 15:48:00.054404577 +0000 UTC m=+346.357198065" lastFinishedPulling="2026-01-27 15:48:02.467166043 +0000 UTC m=+348.769959531" observedRunningTime="2026-01-27 15:48:03.136466163 +0000 UTC m=+349.439259661" watchObservedRunningTime="2026-01-27 15:48:03.136779073 +0000 UTC m=+349.439572561" Jan 27 15:48:03 crc kubenswrapper[4966]: I0127 15:48:03.172760 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6zhtk" podStartSLOduration=2.742645201 podStartE2EDuration="5.172746298s" podCreationTimestamp="2026-01-27 15:47:58 +0000 UTC" firstStartedPulling="2026-01-27 15:48:00.060065891 +0000 UTC m=+346.362859379" lastFinishedPulling="2026-01-27 15:48:02.490166988 +0000 UTC m=+348.792960476" observedRunningTime="2026-01-27 15:48:03.169533514 +0000 UTC m=+349.472327012" watchObservedRunningTime="2026-01-27 15:48:03.172746298 +0000 UTC m=+349.475539786" Jan 27 15:48:04 crc kubenswrapper[4966]: I0127 15:48:04.102800 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7br4n" event={"ID":"0bcb091b-8f56-46c3-8437-2505b27684da","Type":"ContainerStarted","Data":"8eeb971fcbc5fcfbfb37c503a6bd9e4e0291f687782833f916c72b5a76422f40"} Jan 27 15:48:04 crc kubenswrapper[4966]: I0127 15:48:04.104864 4966 generic.go:334] "Generic (PLEG): container finished" podID="8ebef5e4-f520-44af-9488-659932ab7ff8" containerID="7595388f340a57c46e05b9630a7708ccaf7dbc8796dec19fd5ffe084ee08d717" exitCode=0 Jan 27 15:48:04 crc kubenswrapper[4966]: I0127 15:48:04.105022 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvjsg" event={"ID":"8ebef5e4-f520-44af-9488-659932ab7ff8","Type":"ContainerDied","Data":"7595388f340a57c46e05b9630a7708ccaf7dbc8796dec19fd5ffe084ee08d717"} Jan 27 15:48:05 crc kubenswrapper[4966]: I0127 15:48:05.112728 4966 generic.go:334] "Generic (PLEG): container finished" podID="0bcb091b-8f56-46c3-8437-2505b27684da" 
containerID="8eeb971fcbc5fcfbfb37c503a6bd9e4e0291f687782833f916c72b5a76422f40" exitCode=0 Jan 27 15:48:05 crc kubenswrapper[4966]: I0127 15:48:05.112961 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7br4n" event={"ID":"0bcb091b-8f56-46c3-8437-2505b27684da","Type":"ContainerDied","Data":"8eeb971fcbc5fcfbfb37c503a6bd9e4e0291f687782833f916c72b5a76422f40"} Jan 27 15:48:05 crc kubenswrapper[4966]: I0127 15:48:05.117323 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvjsg" event={"ID":"8ebef5e4-f520-44af-9488-659932ab7ff8","Type":"ContainerStarted","Data":"6230354fdd51712bb1d53a9380c5e5fa6450a910bfd4f16e32600a4d101a575e"} Jan 27 15:48:05 crc kubenswrapper[4966]: I0127 15:48:05.158406 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dvjsg" podStartSLOduration=2.645357416 podStartE2EDuration="5.158391129s" podCreationTimestamp="2026-01-27 15:48:00 +0000 UTC" firstStartedPulling="2026-01-27 15:48:02.069851113 +0000 UTC m=+348.372644601" lastFinishedPulling="2026-01-27 15:48:04.582884826 +0000 UTC m=+350.885678314" observedRunningTime="2026-01-27 15:48:05.156700953 +0000 UTC m=+351.459494451" watchObservedRunningTime="2026-01-27 15:48:05.158391129 +0000 UTC m=+351.461184617" Jan 27 15:48:06 crc kubenswrapper[4966]: I0127 15:48:06.124468 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7br4n" event={"ID":"0bcb091b-8f56-46c3-8437-2505b27684da","Type":"ContainerStarted","Data":"da68f97af0c8651899efda07024d13a2cb4d0107c795679c95ba6aec257f76e9"} Jan 27 15:48:06 crc kubenswrapper[4966]: I0127 15:48:06.142213 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7br4n" podStartSLOduration=2.7492268920000003 podStartE2EDuration="5.142195586s" podCreationTimestamp="2026-01-27 15:48:01 +0000 UTC" firstStartedPulling="2026-01-27 15:48:03.093722848 +0000 UTC m=+349.396516336" lastFinishedPulling="2026-01-27 15:48:05.486691542 +0000 UTC m=+351.789485030" observedRunningTime="2026-01-27 15:48:06.139731566 +0000 UTC m=+352.442525064" watchObservedRunningTime="2026-01-27 15:48:06.142195586 +0000 UTC m=+352.444989074" Jan 27 15:48:08 crc kubenswrapper[4966]: I0127 15:48:08.664301 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:48:08 crc kubenswrapper[4966]: I0127 15:48:08.664970 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:48:08 crc kubenswrapper[4966]: I0127 15:48:08.727630 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:48:09 crc kubenswrapper[4966]: I0127 15:48:09.193088 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6zhtk" Jan 27 15:48:09 crc kubenswrapper[4966]: I0127 15:48:09.277207 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:48:09 crc kubenswrapper[4966]: I0127 15:48:09.278175 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:48:09 crc kubenswrapper[4966]: I0127 15:48:09.324441 4966 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:48:10 crc kubenswrapper[4966]: I0127 15:48:10.120268 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:48:10 crc kubenswrapper[4966]: I0127 15:48:10.120337 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:48:10 crc kubenswrapper[4966]: I0127 15:48:10.179736 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q8wrz" Jan 27 15:48:11 crc kubenswrapper[4966]: I0127 15:48:11.315774 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:11 crc kubenswrapper[4966]: I0127 15:48:11.315855 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:11 crc kubenswrapper[4966]: I0127 15:48:11.362431 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:11 crc kubenswrapper[4966]: I0127 15:48:11.870630 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:11 crc kubenswrapper[4966]: I0127 15:48:11.870688 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:11 crc kubenswrapper[4966]: I0127 15:48:11.906370 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:12 crc kubenswrapper[4966]: I0127 15:48:12.193379 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7br4n" Jan 27 15:48:12 crc kubenswrapper[4966]: I0127 15:48:12.197746 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dvjsg" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.340074 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4"] Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.341428 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.344595 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.345006 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.345051 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.344887 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.345054 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.352295 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4"] Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.381248 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-954x4\" (UID: \"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.381549 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqd6\" (UniqueName: \"kubernetes.io/projected/3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c-kube-api-access-7tqd6\") pod \"cluster-monitoring-operator-6d5b84845-954x4\" (UID: \"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.381618 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-954x4\" (UID: \"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.483207 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tqd6\" (UniqueName: \"kubernetes.io/projected/3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c-kube-api-access-7tqd6\") pod \"cluster-monitoring-operator-6d5b84845-954x4\" (UID: \"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.483306 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-954x4\" (UID: \"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 
15:48:27.483399 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-954x4\" (UID: \"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.485516 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-954x4\" (UID: \"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.490713 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-954x4\" (UID: \"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.499395 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tqd6\" (UniqueName: \"kubernetes.io/projected/3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c-kube-api-access-7tqd6\") pod \"cluster-monitoring-operator-6d5b84845-954x4\" (UID: \"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:27 crc kubenswrapper[4966]: I0127 15:48:27.657817 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" Jan 27 15:48:28 crc kubenswrapper[4966]: I0127 15:48:28.126221 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4"] Jan 27 15:48:28 crc kubenswrapper[4966]: W0127 15:48:28.132031 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d8b10e3_996f_45dc_9d21_b0fd3a7cee0c.slice/crio-856aa422bff959c78c3f7fad962b94cd1079988f9faa83e72f092638fcfa6c2b WatchSource:0}: Error finding container 856aa422bff959c78c3f7fad962b94cd1079988f9faa83e72f092638fcfa6c2b: Status 404 returned error can't find the container with id 856aa422bff959c78c3f7fad962b94cd1079988f9faa83e72f092638fcfa6c2b Jan 27 15:48:28 crc kubenswrapper[4966]: I0127 15:48:28.232216 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" event={"ID":"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c","Type":"ContainerStarted","Data":"856aa422bff959c78c3f7fad962b94cd1079988f9faa83e72f092638fcfa6c2b"} Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.239919 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xfl7t"] Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.241882 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.247587 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" event={"ID":"3d8b10e3-996f-45dc-9d21-b0fd3a7cee0c","Type":"ContainerStarted","Data":"8d5c1d3237dce60c37671c4ebb16a9b07f718a0c4286c22b79a270a1f9a12d88"} Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.279047 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xfl7t"] Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.320276 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-954x4" podStartSLOduration=1.706460689 podStartE2EDuration="3.320252994s" podCreationTimestamp="2026-01-27 15:48:27 +0000 UTC" firstStartedPulling="2026-01-27 15:48:28.13383837 +0000 UTC m=+374.436631858" lastFinishedPulling="2026-01-27 15:48:29.747630675 +0000 UTC m=+376.050424163" observedRunningTime="2026-01-27 15:48:30.312225164 +0000 UTC m=+376.615018672" watchObservedRunningTime="2026-01-27 15:48:30.320252994 +0000 UTC m=+376.623046482" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.332731 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b55dd294-68a4-4eba-b567-796255583e28-bound-sa-token\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.332798 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b55dd294-68a4-4eba-b567-796255583e28-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.332816 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b55dd294-68a4-4eba-b567-796255583e28-registry-tls\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.332852 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b55dd294-68a4-4eba-b567-796255583e28-registry-certificates\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.332884 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.332922 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b55dd294-68a4-4eba-b567-796255583e28-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.332960 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv6fv\" (UniqueName: \"kubernetes.io/projected/b55dd294-68a4-4eba-b567-796255583e28-kube-api-access-jv6fv\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.333028 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b55dd294-68a4-4eba-b567-796255583e28-trusted-ca\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.364230 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.376616 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz"] Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.377494 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.380203 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.380527 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-65ppq" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.384681 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz"] Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.434381 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b55dd294-68a4-4eba-b567-796255583e28-trusted-ca\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.434451 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/f58eecb1-9324-4165-9446-631a0438392e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lptwz\" (UID: \"f58eecb1-9324-4165-9446-631a0438392e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.434485 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b55dd294-68a4-4eba-b567-796255583e28-bound-sa-token\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.434535 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b55dd294-68a4-4eba-b567-796255583e28-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.434572 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b55dd294-68a4-4eba-b567-796255583e28-registry-tls\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.434598 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b55dd294-68a4-4eba-b567-796255583e28-registry-certificates\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.434628 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b55dd294-68a4-4eba-b567-796255583e28-ca-trust-extracted\") pod 
\"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.434647 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv6fv\" (UniqueName: \"kubernetes.io/projected/b55dd294-68a4-4eba-b567-796255583e28-kube-api-access-jv6fv\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.435245 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b55dd294-68a4-4eba-b567-796255583e28-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.436091 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b55dd294-68a4-4eba-b567-796255583e28-registry-certificates\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.437008 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b55dd294-68a4-4eba-b567-796255583e28-trusted-ca\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.439796 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b55dd294-68a4-4eba-b567-796255583e28-registry-tls\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.439819 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b55dd294-68a4-4eba-b567-796255583e28-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.452428 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b55dd294-68a4-4eba-b567-796255583e28-bound-sa-token\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.452624 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv6fv\" (UniqueName: \"kubernetes.io/projected/b55dd294-68a4-4eba-b567-796255583e28-kube-api-access-jv6fv\") pod \"image-registry-66df7c8f76-xfl7t\" (UID: \"b55dd294-68a4-4eba-b567-796255583e28\") " pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.536227 4966 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/f58eecb1-9324-4165-9446-631a0438392e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lptwz\" (UID: \"f58eecb1-9324-4165-9446-631a0438392e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 15:48:30 crc kubenswrapper[4966]: E0127 15:48:30.536338 4966 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Jan 27 15:48:30 crc kubenswrapper[4966]: E0127 15:48:30.536386 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f58eecb1-9324-4165-9446-631a0438392e-tls-certificates podName:f58eecb1-9324-4165-9446-631a0438392e nodeName:}" failed. No retries permitted until 2026-01-27 15:48:31.036371335 +0000 UTC m=+377.339164823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/f58eecb1-9324-4165-9446-631a0438392e-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-lptwz" (UID: "f58eecb1-9324-4165-9446-631a0438392e") : secret "prometheus-operator-admission-webhook-tls" not found Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.555639 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:30 crc kubenswrapper[4966]: I0127 15:48:30.969539 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xfl7t"] Jan 27 15:48:30 crc kubenswrapper[4966]: W0127 15:48:30.981049 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb55dd294_68a4_4eba_b567_796255583e28.slice/crio-062f9a0743cc29ee7b23fc141643d833fa5be0a0579fda19d153d71a9c907169 WatchSource:0}: Error finding container 062f9a0743cc29ee7b23fc141643d833fa5be0a0579fda19d153d71a9c907169: Status 404 returned error can't find the container with id 062f9a0743cc29ee7b23fc141643d833fa5be0a0579fda19d153d71a9c907169 Jan 27 15:48:31 crc kubenswrapper[4966]: I0127 15:48:31.041359 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/f58eecb1-9324-4165-9446-631a0438392e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lptwz\" (UID: \"f58eecb1-9324-4165-9446-631a0438392e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 15:48:31 crc kubenswrapper[4966]: I0127 15:48:31.046651 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/f58eecb1-9324-4165-9446-631a0438392e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lptwz\" (UID: \"f58eecb1-9324-4165-9446-631a0438392e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 15:48:31 crc kubenswrapper[4966]: I0127 15:48:31.256861 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" event={"ID":"b55dd294-68a4-4eba-b567-796255583e28","Type":"ContainerStarted","Data":"6ccfcda31069d04170025c037a991f04c331d17f9f977e8cbf9729581f476671"} Jan 27 15:48:31 crc kubenswrapper[4966]: I0127 15:48:31.256913 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" event={"ID":"b55dd294-68a4-4eba-b567-796255583e28","Type":"ContainerStarted","Data":"062f9a0743cc29ee7b23fc141643d833fa5be0a0579fda19d153d71a9c907169"} Jan 27 15:48:31 crc kubenswrapper[4966]: I0127 15:48:31.257009 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:31 crc kubenswrapper[4966]: I0127 15:48:31.281825 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" podStartSLOduration=1.2818074 podStartE2EDuration="1.2818074s" podCreationTimestamp="2026-01-27 15:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:48:31.278010188 +0000 UTC m=+377.580803686" watchObservedRunningTime="2026-01-27 15:48:31.2818074 +0000 UTC m=+377.584600888" Jan 27 15:48:31 crc kubenswrapper[4966]: I0127 15:48:31.300090 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 15:48:31 crc kubenswrapper[4966]: I0127 15:48:31.691107 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz"] Jan 27 15:48:31 crc kubenswrapper[4966]: W0127 15:48:31.694473 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf58eecb1_9324_4165_9446_631a0438392e.slice/crio-0aa183cd2c394fcc8b25a4d5f3d2bf951407a2e81c9c759e54ffd8e0338a7b0a WatchSource:0}: Error finding container 0aa183cd2c394fcc8b25a4d5f3d2bf951407a2e81c9c759e54ffd8e0338a7b0a: Status 404 returned error can't find the container with id 0aa183cd2c394fcc8b25a4d5f3d2bf951407a2e81c9c759e54ffd8e0338a7b0a Jan 27 15:48:32 crc kubenswrapper[4966]: I0127 15:48:32.264697 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" event={"ID":"f58eecb1-9324-4165-9446-631a0438392e","Type":"ContainerStarted","Data":"0aa183cd2c394fcc8b25a4d5f3d2bf951407a2e81c9c759e54ffd8e0338a7b0a"} Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.279273 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" event={"ID":"f58eecb1-9324-4165-9446-631a0438392e","Type":"ContainerStarted","Data":"9c8d1c00d3ce58b79a757a98f778173e98590799e4aec661eba4e7e4c12f7247"} Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.279626 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.287922 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.297516 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podStartSLOduration=1.9802464739999999 podStartE2EDuration="3.297490743s" podCreationTimestamp="2026-01-27 15:48:30 +0000 UTC" firstStartedPulling="2026-01-27 15:48:31.696244975 +0000 UTC m=+377.999038463" lastFinishedPulling="2026-01-27 
15:48:33.013489254 +0000 UTC m=+379.316282732" observedRunningTime="2026-01-27 15:48:33.29244253 +0000 UTC m=+379.595236038" watchObservedRunningTime="2026-01-27 15:48:33.297490743 +0000 UTC m=+379.600284241" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.419720 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-tfv7d"] Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.420768 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.422228 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-4d6fc" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.422364 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.423240 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.423719 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.434487 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-tfv7d"] Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.577493 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-metrics-client-ca\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.577541 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.577581 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h95td\" (UniqueName: \"kubernetes.io/projected/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-kube-api-access-h95td\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.577740 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.678975 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-metrics-client-ca\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.679483 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.679622 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h95td\" (UniqueName: \"kubernetes.io/projected/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-kube-api-access-h95td\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.679710 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.679890 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-metrics-client-ca\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.684925 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.688830 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.696967 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h95td\" (UniqueName: \"kubernetes.io/projected/b6fd0f07-fc09-4659-871b-f5c6f8ba38ec-kube-api-access-h95td\") pod \"prometheus-operator-db54df47d-tfv7d\" (UID: \"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:33 crc kubenswrapper[4966]: I0127 15:48:33.735836 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" Jan 27 15:48:34 crc kubenswrapper[4966]: I0127 15:48:34.144738 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-tfv7d"] Jan 27 15:48:34 crc kubenswrapper[4966]: W0127 15:48:34.154352 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6fd0f07_fc09_4659_871b_f5c6f8ba38ec.slice/crio-3d7513c504a304b87001cf04f5b2553e470aab2d298b4eafa546891b763d0e02 WatchSource:0}: Error finding container 3d7513c504a304b87001cf04f5b2553e470aab2d298b4eafa546891b763d0e02: Status 404 returned error can't find the container with id 3d7513c504a304b87001cf04f5b2553e470aab2d298b4eafa546891b763d0e02 Jan 27 15:48:34 crc kubenswrapper[4966]: I0127 15:48:34.285280 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" event={"ID":"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec","Type":"ContainerStarted","Data":"3d7513c504a304b87001cf04f5b2553e470aab2d298b4eafa546891b763d0e02"} Jan 27 15:48:36 crc kubenswrapper[4966]: I0127 15:48:36.298627 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" event={"ID":"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec","Type":"ContainerStarted","Data":"50a37ad404608eb5c9d3140551a1b9d236cbc9ac0342b83cd0f976426024725a"} Jan 27 15:48:36 crc kubenswrapper[4966]: I0127 15:48:36.300226 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" event={"ID":"b6fd0f07-fc09-4659-871b-f5c6f8ba38ec","Type":"ContainerStarted","Data":"502dbccacf2f7638f324f0101f25214aed52849c89331afec32fa7d08c0d6d78"} Jan 27 15:48:36 crc kubenswrapper[4966]: I0127 15:48:36.328412 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-tfv7d" podStartSLOduration=2.051325855 podStartE2EDuration="3.328392712s" podCreationTimestamp="2026-01-27 15:48:33 +0000 UTC" firstStartedPulling="2026-01-27 15:48:34.15776044 +0000 UTC m=+380.460553938" lastFinishedPulling="2026-01-27 15:48:35.434827317 +0000 UTC m=+381.737620795" observedRunningTime="2026-01-27 15:48:36.316443485 +0000 UTC m=+382.619236983" watchObservedRunningTime="2026-01-27 15:48:36.328392712 +0000 UTC m=+382.631186210" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.772091 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq"] Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.773085 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.774854 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-7rx88" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.775173 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.776274 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.811773 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq"] Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.813062 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j"] Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.814139 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.819116 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.819374 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.819677 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-tgpq4" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.826588 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.834060 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j"] Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.863257 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-v2696"] Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.864364 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.866181 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.866213 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-jl2x7" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.866856 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933477 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d83fbdd7-a083-4c93-9948-1ede8ed239f5-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933523 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d83fbdd7-a083-4c93-9948-1ede8ed239f5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933551 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jhrp\" (UniqueName: \"kubernetes.io/projected/d83fbdd7-a083-4c93-9948-1ede8ed239f5-kube-api-access-4jhrp\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933571 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/419c82c1-8186-4352-a26c-e3114a250a46-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933593 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/419c82c1-8186-4352-a26c-e3114a250a46-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933613 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d83fbdd7-a083-4c93-9948-1ede8ed239f5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933633 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933712 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xt7\" (UniqueName: \"kubernetes.io/projected/419c82c1-8186-4352-a26c-e3114a250a46-kube-api-access-67xt7\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933739 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:37 crc kubenswrapper[4966]: I0127 15:48:37.933756 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.036807 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-textfile\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.036875 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.036922 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wsbj\" (UniqueName: \"kubernetes.io/projected/3da274f3-ced1-41a4-bce9-f9d00380d63d-kube-api-access-5wsbj\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.036961 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3da274f3-ced1-41a4-bce9-f9d00380d63d-metrics-client-ca\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037017 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-67xt7\" (UniqueName: \"kubernetes.io/projected/419c82c1-8186-4352-a26c-e3114a250a46-kube-api-access-67xt7\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037040 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3da274f3-ced1-41a4-bce9-f9d00380d63d-sys\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037076 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037098 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037122 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d83fbdd7-a083-4c93-9948-1ede8ed239f5-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037145 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3da274f3-ced1-41a4-bce9-f9d00380d63d-root\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037170 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d83fbdd7-a083-4c93-9948-1ede8ed239f5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037196 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-tls\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037220 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jhrp\" (UniqueName: \"kubernetes.io/projected/d83fbdd7-a083-4c93-9948-1ede8ed239f5-kube-api-access-4jhrp\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: 
\"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037243 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/419c82c1-8186-4352-a26c-e3114a250a46-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037268 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-wtmp\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037293 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/419c82c1-8186-4352-a26c-e3114a250a46-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037320 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d83fbdd7-a083-4c93-9948-1ede8ed239f5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.037345 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.038311 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.038869 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/419c82c1-8186-4352-a26c-e3114a250a46-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.039534 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/419c82c1-8186-4352-a26c-e3114a250a46-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: 
\"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.045462 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d83fbdd7-a083-4c93-9948-1ede8ed239f5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.045598 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d83fbdd7-a083-4c93-9948-1ede8ed239f5-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:38 crc kubenswrapper[4966]: E0127 15:48:38.045673 4966 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Jan 27 15:48:38 crc kubenswrapper[4966]: E0127 15:48:38.045801 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-tls podName:419c82c1-8186-4352-a26c-e3114a250a46 nodeName:}" failed. No retries permitted until 2026-01-27 15:48:38.545779982 +0000 UTC m=+384.848573470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-7pf2j" (UID: "419c82c1-8186-4352-a26c-e3114a250a46") : secret "kube-state-metrics-tls" not found Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.047142 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.048348 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d83fbdd7-a083-4c93-9948-1ede8ed239f5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.056672 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xt7\" (UniqueName: \"kubernetes.io/projected/419c82c1-8186-4352-a26c-e3114a250a46-kube-api-access-67xt7\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.057460 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jhrp\" (UniqueName: \"kubernetes.io/projected/d83fbdd7-a083-4c93-9948-1ede8ed239f5-kube-api-access-4jhrp\") pod \"openshift-state-metrics-566fddb674-tjpnq\" (UID: \"d83fbdd7-a083-4c93-9948-1ede8ed239f5\") 
" pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.087459 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138320 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-textfile\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138368 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138390 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wsbj\" (UniqueName: \"kubernetes.io/projected/3da274f3-ced1-41a4-bce9-f9d00380d63d-kube-api-access-5wsbj\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138418 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3da274f3-ced1-41a4-bce9-f9d00380d63d-metrics-client-ca\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138460 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3da274f3-ced1-41a4-bce9-f9d00380d63d-sys\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138498 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3da274f3-ced1-41a4-bce9-f9d00380d63d-root\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138515 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-tls\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138535 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-wtmp\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138721 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: 
\"kubernetes.io/host-path/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-wtmp\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138765 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3da274f3-ced1-41a4-bce9-f9d00380d63d-sys\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138852 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-textfile\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.138926 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3da274f3-ced1-41a4-bce9-f9d00380d63d-root\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: E0127 15:48:38.138996 4966 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Jan 27 15:48:38 crc kubenswrapper[4966]: E0127 15:48:38.139039 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-tls podName:3da274f3-ced1-41a4-bce9-f9d00380d63d nodeName:}" failed. No retries permitted until 2026-01-27 15:48:38.639024473 +0000 UTC m=+384.941817961 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-tls") pod "node-exporter-v2696" (UID: "3da274f3-ced1-41a4-bce9-f9d00380d63d") : secret "node-exporter-tls" not found Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.139632 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3da274f3-ced1-41a4-bce9-f9d00380d63d-metrics-client-ca\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.143477 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.153841 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wsbj\" (UniqueName: \"kubernetes.io/projected/3da274f3-ced1-41a4-bce9-f9d00380d63d-kube-api-access-5wsbj\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.517043 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq"] Jan 27 15:48:38 crc kubenswrapper[4966]: W0127 15:48:38.522056 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd83fbdd7_a083_4c93_9948_1ede8ed239f5.slice/crio-18fe51f2b3e4a043eb3fbd239dc89d32b9c6fc9d6f9d6b064658bc2de9f5b3ec WatchSource:0}: Error finding container 18fe51f2b3e4a043eb3fbd239dc89d32b9c6fc9d6f9d6b064658bc2de9f5b3ec: Status 404 returned error can't find the container with id 18fe51f2b3e4a043eb3fbd239dc89d32b9c6fc9d6f9d6b064658bc2de9f5b3ec Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.644551 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.644608 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-tls\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.649552 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/419c82c1-8186-4352-a26c-e3114a250a46-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-7pf2j\" (UID: \"419c82c1-8186-4352-a26c-e3114a250a46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.649725 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3da274f3-ced1-41a4-bce9-f9d00380d63d-node-exporter-tls\") pod \"node-exporter-v2696\" (UID: \"3da274f3-ced1-41a4-bce9-f9d00380d63d\") " pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.730889 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.780976 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-v2696" Jan 27 15:48:38 crc kubenswrapper[4966]: W0127 15:48:38.821852 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da274f3_ced1_41a4_bce9_f9d00380d63d.slice/crio-04d56fc09a220a9966b02d2f1df4f9633196670ecee2b7d7133a0c3bd1e98d7a WatchSource:0}: Error finding container 04d56fc09a220a9966b02d2f1df4f9633196670ecee2b7d7133a0c3bd1e98d7a: Status 404 returned error can't find the container with id 04d56fc09a220a9966b02d2f1df4f9633196670ecee2b7d7133a0c3bd1e98d7a Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.865447 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.867810 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.870997 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.871305 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.877349 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.877431 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.877633 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.877680 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.879159 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.879403 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-qvzsk" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.887955 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.894687 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950452 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/8751867e-cf32-4125-9bd2-cfb117d85792-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950487 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950521 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8751867e-cf32-4125-9bd2-cfb117d85792-config-out\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950539 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k98w6\" (UniqueName: \"kubernetes.io/projected/8751867e-cf32-4125-9bd2-cfb117d85792-kube-api-access-k98w6\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950556 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-web-config\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950568 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8751867e-cf32-4125-9bd2-cfb117d85792-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950589 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950687 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-config-volume\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950703 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " 
pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950725 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950744 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8751867e-cf32-4125-9bd2-cfb117d85792-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:38 crc kubenswrapper[4966]: I0127 15:48:38.950759 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8751867e-cf32-4125-9bd2-cfb117d85792-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051526 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-config-volume\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051568 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051596 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051621 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8751867e-cf32-4125-9bd2-cfb117d85792-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051660 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8751867e-cf32-4125-9bd2-cfb117d85792-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051684 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8751867e-cf32-4125-9bd2-cfb117d85792-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051707 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051748 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8751867e-cf32-4125-9bd2-cfb117d85792-config-out\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051768 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k98w6\" (UniqueName: \"kubernetes.io/projected/8751867e-cf32-4125-9bd2-cfb117d85792-kube-api-access-k98w6\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051788 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-web-config\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051807 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8751867e-cf32-4125-9bd2-cfb117d85792-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.051834 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.052438 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8751867e-cf32-4125-9bd2-cfb117d85792-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.053140 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8751867e-cf32-4125-9bd2-cfb117d85792-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.053559 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8751867e-cf32-4125-9bd2-cfb117d85792-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.056791 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8751867e-cf32-4125-9bd2-cfb117d85792-config-out\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.057479 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-web-config\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.058150 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.058343 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8751867e-cf32-4125-9bd2-cfb117d85792-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.058560 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.059011 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-config-volume\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.059147 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.059581 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8751867e-cf32-4125-9bd2-cfb117d85792-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.067974 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k98w6\" (UniqueName: 
\"kubernetes.io/projected/8751867e-cf32-4125-9bd2-cfb117d85792-kube-api-access-k98w6\") pod \"alertmanager-main-0\" (UID: \"8751867e-cf32-4125-9bd2-cfb117d85792\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.193631 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j"] Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.193966 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 27 15:48:39 crc kubenswrapper[4966]: W0127 15:48:39.204814 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod419c82c1_8186_4352_a26c_e3114a250a46.slice/crio-0c556ddea79bd7644644565b1c444c060e0c340570ea16fc30be034487a12ac8 WatchSource:0}: Error finding container 0c556ddea79bd7644644565b1c444c060e0c340570ea16fc30be034487a12ac8: Status 404 returned error can't find the container with id 0c556ddea79bd7644644565b1c444c060e0c340570ea16fc30be034487a12ac8 Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.318802 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" event={"ID":"d83fbdd7-a083-4c93-9948-1ede8ed239f5","Type":"ContainerStarted","Data":"78abbfe0cd94028678af6d59727133225e0ed4dda76aa0990662c5e2db5a2907"} Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.318864 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" event={"ID":"d83fbdd7-a083-4c93-9948-1ede8ed239f5","Type":"ContainerStarted","Data":"40466e4d1b0fa560d97aafc3ea836e489439fa31bc15cb9ea387ac06ba6d1d50"} Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.318882 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" event={"ID":"d83fbdd7-a083-4c93-9948-1ede8ed239f5","Type":"ContainerStarted","Data":"18fe51f2b3e4a043eb3fbd239dc89d32b9c6fc9d6f9d6b064658bc2de9f5b3ec"} Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.319869 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" event={"ID":"419c82c1-8186-4352-a26c-e3114a250a46","Type":"ContainerStarted","Data":"0c556ddea79bd7644644565b1c444c060e0c340570ea16fc30be034487a12ac8"} Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.320583 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v2696" event={"ID":"3da274f3-ced1-41a4-bce9-f9d00380d63d","Type":"ContainerStarted","Data":"04d56fc09a220a9966b02d2f1df4f9633196670ecee2b7d7133a0c3bd1e98d7a"} Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.607274 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.786298 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-78c6ff45cc-gspnf"] Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.788517 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.790610 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.792207 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.792463 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.792757 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.792964 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.799261 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-b8cgd" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.799640 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-65j1nsg0ih0na" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.824013 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-78c6ff45cc-gspnf"] Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.971991 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6px\" (UniqueName: \"kubernetes.io/projected/76f00c13-2195-40be-829a-ce9e9c94a795-kube-api-access-sx6px\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.972043 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.972081 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/76f00c13-2195-40be-829a-ce9e9c94a795-metrics-client-ca\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.972101 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.972680 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-tls\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.972779 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-grpc-tls\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.972944 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:39 crc kubenswrapper[4966]: I0127 15:48:39.973030 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.074406 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx6px\" (UniqueName: \"kubernetes.io/projected/76f00c13-2195-40be-829a-ce9e9c94a795-kube-api-access-sx6px\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.074470 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.074519 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/76f00c13-2195-40be-829a-ce9e9c94a795-metrics-client-ca\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.074551 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.074579 4966 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-tls\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.074646 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-grpc-tls\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.074687 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.074723 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.075695 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/76f00c13-2195-40be-829a-ce9e9c94a795-metrics-client-ca\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.080725 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.081359 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.084138 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-grpc-tls\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.085418 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-tls\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.085830 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.089701 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/76f00c13-2195-40be-829a-ce9e9c94a795-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.092945 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx6px\" (UniqueName: \"kubernetes.io/projected/76f00c13-2195-40be-829a-ce9e9c94a795-kube-api-access-sx6px\") pod \"thanos-querier-78c6ff45cc-gspnf\" (UID: \"76f00c13-2195-40be-829a-ce9e9c94a795\") " pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.113447 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.119414 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.119464 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.119503 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.119877 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa991efa8c264472d6ff0c3eb9586659e3c6d4cca2ccc3928e23ac1cf4a47b67"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.119937 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://fa991efa8c264472d6ff0c3eb9586659e3c6d4cca2ccc3928e23ac1cf4a47b67" gracePeriod=600 Jan 27 15:48:40 
crc kubenswrapper[4966]: I0127 15:48:40.332539 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="fa991efa8c264472d6ff0c3eb9586659e3c6d4cca2ccc3928e23ac1cf4a47b67" exitCode=0 Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.332654 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"fa991efa8c264472d6ff0c3eb9586659e3c6d4cca2ccc3928e23ac1cf4a47b67"} Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.332724 4966 scope.go:117] "RemoveContainer" containerID="3b82ca1db6cc7b8841c93a45ccb2b124f7c9bd3d29d5b6412cd5312dce2fb664" Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.335434 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8751867e-cf32-4125-9bd2-cfb117d85792","Type":"ContainerStarted","Data":"ac8f652c67ad459a2fe63d0ff38eadcae0bc9075473c5c09a461a689da3c40eb"} Jan 27 15:48:40 crc kubenswrapper[4966]: I0127 15:48:40.836218 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-78c6ff45cc-gspnf"] Jan 27 15:48:41 crc kubenswrapper[4966]: I0127 15:48:41.343957 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" event={"ID":"d83fbdd7-a083-4c93-9948-1ede8ed239f5","Type":"ContainerStarted","Data":"f10e5750fea50e026241782ece3515be0e1cc50f1e63ea18993255122d9e35de"} Jan 27 15:48:41 crc kubenswrapper[4966]: I0127 15:48:41.346932 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"2f252fee4f97cee252f6da079f9b6faf80ef04117d5518af7975a7228534e6c0"} Jan 27 15:48:41 crc kubenswrapper[4966]: I0127 15:48:41.348319 4966 generic.go:334] "Generic (PLEG): container finished" podID="3da274f3-ced1-41a4-bce9-f9d00380d63d" containerID="e8301a9513df27bce70d2640a8fe0293e718ac3c0575fc4b4654b35bc3a12f7d" exitCode=0 Jan 27 15:48:41 crc kubenswrapper[4966]: I0127 15:48:41.348432 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v2696" event={"ID":"3da274f3-ced1-41a4-bce9-f9d00380d63d","Type":"ContainerDied","Data":"e8301a9513df27bce70d2640a8fe0293e718ac3c0575fc4b4654b35bc3a12f7d"} Jan 27 15:48:41 crc kubenswrapper[4966]: I0127 15:48:41.350187 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" event={"ID":"76f00c13-2195-40be-829a-ce9e9c94a795","Type":"ContainerStarted","Data":"f61544eb932150d0503bbecdef719f88e1650cd0b5ae8339811d7494af9037b2"} Jan 27 15:48:41 crc kubenswrapper[4966]: I0127 15:48:41.369140 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-tjpnq" podStartSLOduration=2.742824414 podStartE2EDuration="4.369116563s" podCreationTimestamp="2026-01-27 15:48:37 +0000 UTC" firstStartedPulling="2026-01-27 15:48:38.828033832 +0000 UTC m=+385.130827320" lastFinishedPulling="2026-01-27 15:48:40.454325981 +0000 UTC m=+386.757119469" observedRunningTime="2026-01-27 15:48:41.36593998 +0000 UTC m=+387.668733488" watchObservedRunningTime="2026-01-27 15:48:41.369116563 +0000 UTC m=+387.671910051" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.365254 4966 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" event={"ID":"419c82c1-8186-4352-a26c-e3114a250a46","Type":"ContainerStarted","Data":"606f68dda0d3a4971df88dfe0629c369b85a2fa6ba9b4416c523674c9ea796af"} Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.365880 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" event={"ID":"419c82c1-8186-4352-a26c-e3114a250a46","Type":"ContainerStarted","Data":"c9e63ff341ebd82145b51f2cdddc5a730112fec9930f24b2ae0d1085e6b7a8c3"} Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.365917 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" event={"ID":"419c82c1-8186-4352-a26c-e3114a250a46","Type":"ContainerStarted","Data":"153fe4319b6ad653d8b327a0c2dde3f327e1f0f8639b777cee8bd3aecb724be1"} Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.367679 4966 generic.go:334] "Generic (PLEG): container finished" podID="8751867e-cf32-4125-9bd2-cfb117d85792" containerID="beafc6e9d78a518b7239c87f45fc133ea2ba75d3797f5b4a8f29cd91277992db" exitCode=0 Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.367771 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8751867e-cf32-4125-9bd2-cfb117d85792","Type":"ContainerDied","Data":"beafc6e9d78a518b7239c87f45fc133ea2ba75d3797f5b4a8f29cd91277992db"} Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.375866 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v2696" event={"ID":"3da274f3-ced1-41a4-bce9-f9d00380d63d","Type":"ContainerStarted","Data":"759e55ee4847e14a0ddcf7aa3cd1c8e4c9f320eb7616865c3b0902ff7202f952"} Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.375934 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v2696" event={"ID":"3da274f3-ced1-41a4-bce9-f9d00380d63d","Type":"ContainerStarted","Data":"1cae768fa45d9a5e1dc7755ec55d061718dcfebe2ab8a2c212da6b428f0ad851"} Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.384925 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7pf2j" podStartSLOduration=3.054001393 podStartE2EDuration="5.384885746s" podCreationTimestamp="2026-01-27 15:48:37 +0000 UTC" firstStartedPulling="2026-01-27 15:48:39.212446944 +0000 UTC m=+385.515240432" lastFinishedPulling="2026-01-27 15:48:41.543331297 +0000 UTC m=+387.846124785" observedRunningTime="2026-01-27 15:48:42.384326679 +0000 UTC m=+388.687120187" watchObservedRunningTime="2026-01-27 15:48:42.384885746 +0000 UTC m=+388.687679254" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.406727 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-v2696" podStartSLOduration=3.785338244 podStartE2EDuration="5.406706014s" podCreationTimestamp="2026-01-27 15:48:37 +0000 UTC" firstStartedPulling="2026-01-27 15:48:38.824321371 +0000 UTC m=+385.127114859" lastFinishedPulling="2026-01-27 15:48:40.445689141 +0000 UTC m=+386.748482629" observedRunningTime="2026-01-27 15:48:42.403740308 +0000 UTC m=+388.706533816" watchObservedRunningTime="2026-01-27 15:48:42.406706014 +0000 UTC m=+388.709499492" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.585568 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-559df6b64-pdjkp"] Jan 27 
15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.586786 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.595718 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559df6b64-pdjkp"] Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.726642 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw4fs\" (UniqueName: \"kubernetes.io/projected/c9adf9a0-e627-47c6-a062-d8625cd43969-kube-api-access-lw4fs\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.726715 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-serving-cert\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.726777 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-service-ca\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.726812 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-trusted-ca-bundle\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.726842 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-console-config\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.726881 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-oauth-config\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.726980 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-oauth-serving-cert\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.827989 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-service-ca\") pod \"console-559df6b64-pdjkp\" (UID: 
\"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.828041 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-trusted-ca-bundle\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.828096 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-console-config\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.828200 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-oauth-config\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.828223 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-oauth-serving-cert\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.828251 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw4fs\" (UniqueName: \"kubernetes.io/projected/c9adf9a0-e627-47c6-a062-d8625cd43969-kube-api-access-lw4fs\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.828272 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-serving-cert\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.831662 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-oauth-serving-cert\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.832002 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-trusted-ca-bundle\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.832093 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-service-ca\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " 
pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.835630 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-oauth-config\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.836914 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-console-config\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.839287 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-serving-cert\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.846478 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw4fs\" (UniqueName: \"kubernetes.io/projected/c9adf9a0-e627-47c6-a062-d8625cd43969-kube-api-access-lw4fs\") pod \"console-559df6b64-pdjkp\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") " pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:42 crc kubenswrapper[4966]: I0127 15:48:42.915470 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.091213 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-5fcbd5f794-2hhjm"] Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.092144 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.095395 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.095532 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.098117 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.098196 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-skf9v" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.098587 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.100767 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-co2cvmubl60v1" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.107336 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5fcbd5f794-2hhjm"] Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.232616 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/525a9ae1-69bf-4f75-b283-c0844b828a90-secret-metrics-client-certs\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.232657 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/525a9ae1-69bf-4f75-b283-c0844b828a90-secret-metrics-server-tls\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.232684 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525a9ae1-69bf-4f75-b283-c0844b828a90-client-ca-bundle\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.232711 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/525a9ae1-69bf-4f75-b283-c0844b828a90-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.232751 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zzrr\" (UniqueName: \"kubernetes.io/projected/525a9ae1-69bf-4f75-b283-c0844b828a90-kube-api-access-9zzrr\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " 
pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.232838 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/525a9ae1-69bf-4f75-b283-c0844b828a90-metrics-server-audit-profiles\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.232882 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/525a9ae1-69bf-4f75-b283-c0844b828a90-audit-log\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.334720 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/525a9ae1-69bf-4f75-b283-c0844b828a90-audit-log\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.334786 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/525a9ae1-69bf-4f75-b283-c0844b828a90-secret-metrics-client-certs\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.334810 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/525a9ae1-69bf-4f75-b283-c0844b828a90-secret-metrics-server-tls\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.334833 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525a9ae1-69bf-4f75-b283-c0844b828a90-client-ca-bundle\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.334859 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/525a9ae1-69bf-4f75-b283-c0844b828a90-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.334922 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zzrr\" (UniqueName: \"kubernetes.io/projected/525a9ae1-69bf-4f75-b283-c0844b828a90-kube-api-access-9zzrr\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.334956 4966 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/525a9ae1-69bf-4f75-b283-c0844b828a90-metrics-server-audit-profiles\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.335214 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/525a9ae1-69bf-4f75-b283-c0844b828a90-audit-log\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.336291 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/525a9ae1-69bf-4f75-b283-c0844b828a90-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.337091 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/525a9ae1-69bf-4f75-b283-c0844b828a90-metrics-server-audit-profiles\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.340113 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/525a9ae1-69bf-4f75-b283-c0844b828a90-secret-metrics-client-certs\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.340277 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/525a9ae1-69bf-4f75-b283-c0844b828a90-secret-metrics-server-tls\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.351719 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zzrr\" (UniqueName: \"kubernetes.io/projected/525a9ae1-69bf-4f75-b283-c0844b828a90-kube-api-access-9zzrr\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.352229 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525a9ae1-69bf-4f75-b283-c0844b828a90-client-ca-bundle\") pod \"metrics-server-5fcbd5f794-2hhjm\" (UID: \"525a9ae1-69bf-4f75-b283-c0844b828a90\") " pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.412989 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.557641 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-785c968969-bl9x5"] Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.563618 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.568752 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.569065 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.570242 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-785c968969-bl9x5"] Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.638819 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9bc3bce9-60e2-4ab9-ab45-28e69ba4a877-monitoring-plugin-cert\") pod \"monitoring-plugin-785c968969-bl9x5\" (UID: \"9bc3bce9-60e2-4ab9-ab45-28e69ba4a877\") " pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.741769 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9bc3bce9-60e2-4ab9-ab45-28e69ba4a877-monitoring-plugin-cert\") pod \"monitoring-plugin-785c968969-bl9x5\" (UID: \"9bc3bce9-60e2-4ab9-ab45-28e69ba4a877\") " pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.750251 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9bc3bce9-60e2-4ab9-ab45-28e69ba4a877-monitoring-plugin-cert\") pod \"monitoring-plugin-785c968969-bl9x5\" (UID: \"9bc3bce9-60e2-4ab9-ab45-28e69ba4a877\") " pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.882294 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5fcbd5f794-2hhjm"] Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.888452 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" Jan 27 15:48:43 crc kubenswrapper[4966]: W0127 15:48:43.893432 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525a9ae1_69bf_4f75_b283_c0844b828a90.slice/crio-d3a3428d46d729e36f6573e02bd77f24a0821ed2b780abe7c5351eed5112d8c0 WatchSource:0}: Error finding container d3a3428d46d729e36f6573e02bd77f24a0821ed2b780abe7c5351eed5112d8c0: Status 404 returned error can't find the container with id d3a3428d46d729e36f6573e02bd77f24a0821ed2b780abe7c5351eed5112d8c0 Jan 27 15:48:43 crc kubenswrapper[4966]: I0127 15:48:43.933171 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559df6b64-pdjkp"] Jan 27 15:48:43 crc kubenswrapper[4966]: W0127 15:48:43.947160 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9adf9a0_e627_47c6_a062_d8625cd43969.slice/crio-e350ab2120a11600b99149166f95c24336aefe4ab1d14734e90472570de20e8e WatchSource:0}: Error finding container e350ab2120a11600b99149166f95c24336aefe4ab1d14734e90472570de20e8e: Status 404 returned error can't find the container with id e350ab2120a11600b99149166f95c24336aefe4ab1d14734e90472570de20e8e Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.094586 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.098771 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.115705 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.130494 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.155094 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.155787 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-2gbcn" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.156886 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-38baegrrdmu4" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.157077 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.157154 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.157246 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.157375 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.157502 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 27 15:48:44 crc 
kubenswrapper[4966]: I0127 15:48:44.157628 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.173583 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.173714 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.176191 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.248269 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.248473 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.248580 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.248615 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.248747 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.248778 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/832412d6-8f0c-4372-b056-87d49ac6f4bd-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.248800 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-grpc-tls\") pod \"prometheus-k8s-0\" 
(UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.248878 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.248982 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.249066 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-web-config\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.249104 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/832412d6-8f0c-4372-b056-87d49ac6f4bd-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.249142 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/832412d6-8f0c-4372-b056-87d49ac6f4bd-config-out\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.249207 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.249248 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.249275 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-config\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.249328 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.249397 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.249456 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6cg\" (UniqueName: \"kubernetes.io/projected/832412d6-8f0c-4372-b056-87d49ac6f4bd-kube-api-access-4l6cg\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.350773 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.350828 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.350858 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l6cg\" (UniqueName: \"kubernetes.io/projected/832412d6-8f0c-4372-b056-87d49ac6f4bd-kube-api-access-4l6cg\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.350884 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.350965 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.350999 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351017 4966 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351038 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351057 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351071 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/832412d6-8f0c-4372-b056-87d49ac6f4bd-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351092 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351108 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351129 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-web-config\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351147 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/832412d6-8f0c-4372-b056-87d49ac6f4bd-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351168 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/832412d6-8f0c-4372-b056-87d49ac6f4bd-config-out\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351183 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351202 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.351219 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-config\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.353420 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/832412d6-8f0c-4372-b056-87d49ac6f4bd-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.355417 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.355456 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/832412d6-8f0c-4372-b056-87d49ac6f4bd-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.367252 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/832412d6-8f0c-4372-b056-87d49ac6f4bd-config-out\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.367761 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.370305 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.374853 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.374958 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.375075 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.375714 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.375854 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/832412d6-8f0c-4372-b056-87d49ac6f4bd-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.378454 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-config\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.384545 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-web-config\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.386037 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.386047 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.386295 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.386553 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/832412d6-8f0c-4372-b056-87d49ac6f4bd-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.394701 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l6cg\" (UniqueName: \"kubernetes.io/projected/832412d6-8f0c-4372-b056-87d49ac6f4bd-kube-api-access-4l6cg\") pod \"prometheus-k8s-0\" (UID: \"832412d6-8f0c-4372-b056-87d49ac6f4bd\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.399237 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" event={"ID":"76f00c13-2195-40be-829a-ce9e9c94a795","Type":"ContainerStarted","Data":"7e3e14249453ca59ea4bc100a87b09da8d0a0334e8c48e165ee1809b14af3ba2"} Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.399277 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" event={"ID":"76f00c13-2195-40be-829a-ce9e9c94a795","Type":"ContainerStarted","Data":"f1e3fd8c17466cf139e85176e021c7182d1bdf22a126a2d10cc697ba4dc7c733"} Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.399289 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" event={"ID":"76f00c13-2195-40be-829a-ce9e9c94a795","Type":"ContainerStarted","Data":"ce914b7f7b3923c40a290b52a426de56424ca9c1df4eb65536cedc73f2d833b2"} Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.400533 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" event={"ID":"525a9ae1-69bf-4f75-b283-c0844b828a90","Type":"ContainerStarted","Data":"d3a3428d46d729e36f6573e02bd77f24a0821ed2b780abe7c5351eed5112d8c0"} Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.402597 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559df6b64-pdjkp" event={"ID":"c9adf9a0-e627-47c6-a062-d8625cd43969","Type":"ContainerStarted","Data":"34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5"} Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.402631 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559df6b64-pdjkp" event={"ID":"c9adf9a0-e627-47c6-a062-d8625cd43969","Type":"ContainerStarted","Data":"e350ab2120a11600b99149166f95c24336aefe4ab1d14734e90472570de20e8e"} Jan 27 15:48:44 crc kubenswrapper[4966]: I0127 15:48:44.424685 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-559df6b64-pdjkp" podStartSLOduration=2.42463375 podStartE2EDuration="2.42463375s" podCreationTimestamp="2026-01-27 15:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:48:44.422664466 +0000 UTC m=+390.725457974" watchObservedRunningTime="2026-01-27 15:48:44.42463375 +0000 UTC m=+390.727427238" Jan 27 15:48:44 crc 
kubenswrapper[4966]: I0127 15:48:44.453543 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:48:45 crc kubenswrapper[4966]: I0127 15:48:45.081800 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-785c968969-bl9x5"] Jan 27 15:48:45 crc kubenswrapper[4966]: W0127 15:48:45.168599 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc3bce9_60e2_4ab9_ab45_28e69ba4a877.slice/crio-6994abff1f215b08b1c3e6f5878b1d7f76c03a560e3c9832cdfac06b15311bbe WatchSource:0}: Error finding container 6994abff1f215b08b1c3e6f5878b1d7f76c03a560e3c9832cdfac06b15311bbe: Status 404 returned error can't find the container with id 6994abff1f215b08b1c3e6f5878b1d7f76c03a560e3c9832cdfac06b15311bbe Jan 27 15:48:45 crc kubenswrapper[4966]: I0127 15:48:45.249124 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 15:48:45 crc kubenswrapper[4966]: W0127 15:48:45.255644 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod832412d6_8f0c_4372_b056_87d49ac6f4bd.slice/crio-0e9ea147ad330be4c04c96b88d98274023045888e648856247f6a5321758f2f1 WatchSource:0}: Error finding container 0e9ea147ad330be4c04c96b88d98274023045888e648856247f6a5321758f2f1: Status 404 returned error can't find the container with id 0e9ea147ad330be4c04c96b88d98274023045888e648856247f6a5321758f2f1 Jan 27 15:48:45 crc kubenswrapper[4966]: I0127 15:48:45.410346 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8751867e-cf32-4125-9bd2-cfb117d85792","Type":"ContainerStarted","Data":"ef3e865bd0ad95cf40cf77d34c0730baa3b8696125e986045a5268cbabd5303f"} Jan 27 15:48:45 crc kubenswrapper[4966]: I0127 15:48:45.411424 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" event={"ID":"9bc3bce9-60e2-4ab9-ab45-28e69ba4a877","Type":"ContainerStarted","Data":"6994abff1f215b08b1c3e6f5878b1d7f76c03a560e3c9832cdfac06b15311bbe"} Jan 27 15:48:45 crc kubenswrapper[4966]: I0127 15:48:45.413466 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"832412d6-8f0c-4372-b056-87d49ac6f4bd","Type":"ContainerStarted","Data":"0e9ea147ad330be4c04c96b88d98274023045888e648856247f6a5321758f2f1"} Jan 27 15:48:46 crc kubenswrapper[4966]: I0127 15:48:46.436660 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" event={"ID":"76f00c13-2195-40be-829a-ce9e9c94a795","Type":"ContainerStarted","Data":"2a883ec52cd844887f40e268dc8054742c7926615b91cc4dc750689ea530eb0e"} Jan 27 15:48:46 crc kubenswrapper[4966]: I0127 15:48:46.436742 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" event={"ID":"76f00c13-2195-40be-829a-ce9e9c94a795","Type":"ContainerStarted","Data":"b242d5335f1f5f6f73ad0ada542e843b5569496b78583a067e240319766bfc9a"} Jan 27 15:48:46 crc kubenswrapper[4966]: I0127 15:48:46.438503 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" event={"ID":"525a9ae1-69bf-4f75-b283-c0844b828a90","Type":"ContainerStarted","Data":"f013bc67e00e99c0b09bf2f8d1090628c6bfd382bf7a380af3041ab2d9038e94"} Jan 27 15:48:46 crc 
kubenswrapper[4966]: I0127 15:48:46.440885 4966 generic.go:334] "Generic (PLEG): container finished" podID="832412d6-8f0c-4372-b056-87d49ac6f4bd" containerID="f11eb4fb1d2c3d8c3c13749dfda05636de97c72f6d52e1ada46a463c70cb9443" exitCode=0 Jan 27 15:48:46 crc kubenswrapper[4966]: I0127 15:48:46.440968 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"832412d6-8f0c-4372-b056-87d49ac6f4bd","Type":"ContainerDied","Data":"f11eb4fb1d2c3d8c3c13749dfda05636de97c72f6d52e1ada46a463c70cb9443"} Jan 27 15:48:46 crc kubenswrapper[4966]: I0127 15:48:46.447190 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8751867e-cf32-4125-9bd2-cfb117d85792","Type":"ContainerStarted","Data":"903c3cbc78cf0be72a10e4d5538555e7be2eef2bd51b02e21c4e74ec0f15e2e5"} Jan 27 15:48:46 crc kubenswrapper[4966]: I0127 15:48:46.447224 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8751867e-cf32-4125-9bd2-cfb117d85792","Type":"ContainerStarted","Data":"50a7045f551f7b115c341ece8b61024134b4e70e4d37fb93c08675ffe75fdeaa"} Jan 27 15:48:46 crc kubenswrapper[4966]: I0127 15:48:46.456179 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" podStartSLOduration=1.359981849 podStartE2EDuration="3.456159651s" podCreationTimestamp="2026-01-27 15:48:43 +0000 UTC" firstStartedPulling="2026-01-27 15:48:43.902240667 +0000 UTC m=+390.205034155" lastFinishedPulling="2026-01-27 15:48:45.998418469 +0000 UTC m=+392.301211957" observedRunningTime="2026-01-27 15:48:46.45613775 +0000 UTC m=+392.758931248" watchObservedRunningTime="2026-01-27 15:48:46.456159651 +0000 UTC m=+392.758953149" Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.455752 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" event={"ID":"9bc3bce9-60e2-4ab9-ab45-28e69ba4a877","Type":"ContainerStarted","Data":"5d6d13e1147ea7af4f6064a012fea1a9d25541a3ff916caf5fa9123f0c789bc2"} Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.456100 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.461311 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8751867e-cf32-4125-9bd2-cfb117d85792","Type":"ContainerStarted","Data":"8b65161d2ca84d023d9d03c76037fbd2009cc96d112d9e48f973ccfd8b9420dd"} Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.461369 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8751867e-cf32-4125-9bd2-cfb117d85792","Type":"ContainerStarted","Data":"32d300ba579acfbab0ccce86291875274d8e3742f48ebc9235a3cf6725bc70cf"} Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.461383 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8751867e-cf32-4125-9bd2-cfb117d85792","Type":"ContainerStarted","Data":"12d9a3d215c8d4f8f46b0d50c4efe25b0b20eb7dbd5abe510ae360571c66a501"} Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.463351 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.465577 
4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" event={"ID":"76f00c13-2195-40be-829a-ce9e9c94a795","Type":"ContainerStarted","Data":"fc8cfed27e184d8c359dd770288f0f89c38b03ef1520710cad2e8c112c7f28da"} Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.465769 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.482844 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" podStartSLOduration=2.47860234 podStartE2EDuration="4.482818803s" podCreationTimestamp="2026-01-27 15:48:43 +0000 UTC" firstStartedPulling="2026-01-27 15:48:45.17050311 +0000 UTC m=+391.473296598" lastFinishedPulling="2026-01-27 15:48:47.174719573 +0000 UTC m=+393.477513061" observedRunningTime="2026-01-27 15:48:47.476524779 +0000 UTC m=+393.779318267" watchObservedRunningTime="2026-01-27 15:48:47.482818803 +0000 UTC m=+393.785612301" Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.523279 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.931852011 podStartE2EDuration="9.523261821s" podCreationTimestamp="2026-01-27 15:48:38 +0000 UTC" firstStartedPulling="2026-01-27 15:48:39.628719248 +0000 UTC m=+385.931512736" lastFinishedPulling="2026-01-27 15:48:45.220129058 +0000 UTC m=+391.522922546" observedRunningTime="2026-01-27 15:48:47.520563264 +0000 UTC m=+393.823356782" watchObservedRunningTime="2026-01-27 15:48:47.523261821 +0000 UTC m=+393.826055309" Jan 27 15:48:47 crc kubenswrapper[4966]: I0127 15:48:47.553749 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" podStartSLOduration=3.734318213 podStartE2EDuration="8.553730587s" podCreationTimestamp="2026-01-27 15:48:39 +0000 UTC" firstStartedPulling="2026-01-27 15:48:41.184004847 +0000 UTC m=+387.486798345" lastFinishedPulling="2026-01-27 15:48:46.003417231 +0000 UTC m=+392.306210719" observedRunningTime="2026-01-27 15:48:47.551718522 +0000 UTC m=+393.854512040" watchObservedRunningTime="2026-01-27 15:48:47.553730587 +0000 UTC m=+393.856524085" Jan 27 15:48:48 crc kubenswrapper[4966]: I0127 15:48:48.484070 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" Jan 27 15:48:50 crc kubenswrapper[4966]: I0127 15:48:50.483499 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"832412d6-8f0c-4372-b056-87d49ac6f4bd","Type":"ContainerStarted","Data":"c02d797f04d8fb3fbae1ea7dd10d6a88b9ab901b0f64e5fd3e6d97e175aa9949"} Jan 27 15:48:50 crc kubenswrapper[4966]: I0127 15:48:50.562232 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" Jan 27 15:48:50 crc kubenswrapper[4966]: I0127 15:48:50.657863 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kctp8"] Jan 27 15:48:51 crc kubenswrapper[4966]: I0127 15:48:51.495787 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"832412d6-8f0c-4372-b056-87d49ac6f4bd","Type":"ContainerStarted","Data":"bb7e691a01fbf288d9e4879c0ab2477841e954d858e13982184f07bb0b45e693"} Jan 27 15:48:51 crc kubenswrapper[4966]: I0127 15:48:51.496348 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"832412d6-8f0c-4372-b056-87d49ac6f4bd","Type":"ContainerStarted","Data":"6128308d867364c05318a9ce004f1dc1cfe0aa6ee69e21f06e0c6d086d40bad6"} Jan 27 15:48:51 crc kubenswrapper[4966]: I0127 15:48:51.496364 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"832412d6-8f0c-4372-b056-87d49ac6f4bd","Type":"ContainerStarted","Data":"e6cf0dc75245c9d7cfa115c7441aa1cc94e8d698846ec2da64f386f339a0f6e8"} Jan 27 15:48:51 crc kubenswrapper[4966]: I0127 15:48:51.496374 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"832412d6-8f0c-4372-b056-87d49ac6f4bd","Type":"ContainerStarted","Data":"b684f202c36e7964a64a440978c5038a59f8a16b252e3cdc2c6b860b0527c5c1"} Jan 27 15:48:51 crc kubenswrapper[4966]: I0127 15:48:51.496384 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"832412d6-8f0c-4372-b056-87d49ac6f4bd","Type":"ContainerStarted","Data":"0e060d482952755c117100d4ca309841bbc444b6a17ba2a78f37b41fa5f82bb8"} Jan 27 15:48:51 crc kubenswrapper[4966]: I0127 15:48:51.737638 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.0158105 podStartE2EDuration="7.737613003s" podCreationTimestamp="2026-01-27 15:48:44 +0000 UTC" firstStartedPulling="2026-01-27 15:48:46.443050997 +0000 UTC m=+392.745844485" lastFinishedPulling="2026-01-27 15:48:50.16485346 +0000 UTC m=+396.467646988" observedRunningTime="2026-01-27 15:48:51.733988145 +0000 UTC m=+398.036781723" watchObservedRunningTime="2026-01-27 15:48:51.737613003 +0000 UTC m=+398.040406531" Jan 27 15:48:52 crc kubenswrapper[4966]: I0127 15:48:52.915780 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:52 crc kubenswrapper[4966]: I0127 15:48:52.916529 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:52 crc kubenswrapper[4966]: I0127 15:48:52.920793 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:53 crc kubenswrapper[4966]: I0127 15:48:53.511792 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-559df6b64-pdjkp" Jan 27 15:48:53 crc kubenswrapper[4966]: I0127 15:48:53.573402 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sgfdr"] Jan 27 15:48:54 crc kubenswrapper[4966]: I0127 15:48:54.454155 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 15:49:03 crc kubenswrapper[4966]: I0127 15:49:03.413789 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:49:03 crc kubenswrapper[4966]: I0127 15:49:03.414394 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 15:49:15 crc kubenswrapper[4966]: I0127 
15:49:15.711459 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" podUID="b8111aeb-2c95-4953-a2d0-586c5fcd4940" containerName="registry" containerID="cri-o://e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3" gracePeriod=30
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.608815 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.660569 4966 generic.go:334] "Generic (PLEG): container finished" podID="b8111aeb-2c95-4953-a2d0-586c5fcd4940" containerID="e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3" exitCode=0
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.660609 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" event={"ID":"b8111aeb-2c95-4953-a2d0-586c5fcd4940","Type":"ContainerDied","Data":"e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3"}
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.660636 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8" event={"ID":"b8111aeb-2c95-4953-a2d0-586c5fcd4940","Type":"ContainerDied","Data":"83df70eba59f7c93336512b6f83ee62943058226ccb4520356f4b4c0c8feb67e"}
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.660654 4966 scope.go:117] "RemoveContainer" containerID="e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3"
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.660742 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kctp8"
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.679916 4966 scope.go:117] "RemoveContainer" containerID="e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3"
Jan 27 15:49:16 crc kubenswrapper[4966]: E0127 15:49:16.680403 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3\": container with ID starting with e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3 not found: ID does not exist" containerID="e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3"
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.680446 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3"} err="failed to get container status \"e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3\": rpc error: code = NotFound desc = could not find container \"e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3\": container with ID starting with e828f758870823d7eaf0c64e9c0421f9471b6d951d5d3e4c9fd9f876ab9fa8c3 not found: ID does not exist"
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.728490 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8729\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-kube-api-access-f8729\") pod \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") "
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.728633 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-certificates\") pod \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") "
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.728673 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b8111aeb-2c95-4953-a2d0-586c5fcd4940-installation-pull-secrets\") pod \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") "
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.728792 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") "
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.728826 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b8111aeb-2c95-4953-a2d0-586c5fcd4940-ca-trust-extracted\") pod \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") "
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.728859 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-tls\") pod \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") "
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.728930 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-trusted-ca\") pod \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") "
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.728955 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-bound-sa-token\") pod \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\" (UID: \"b8111aeb-2c95-4953-a2d0-586c5fcd4940\") "
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.748396 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b8111aeb-2c95-4953-a2d0-586c5fcd4940" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.749563 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b8111aeb-2c95-4953-a2d0-586c5fcd4940" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.756132 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b8111aeb-2c95-4953-a2d0-586c5fcd4940" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.757591 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8111aeb-2c95-4953-a2d0-586c5fcd4940-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b8111aeb-2c95-4953-a2d0-586c5fcd4940" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.762324 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-kube-api-access-f8729" (OuterVolumeSpecName: "kube-api-access-f8729") pod "b8111aeb-2c95-4953-a2d0-586c5fcd4940" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940"). InnerVolumeSpecName "kube-api-access-f8729". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.762702 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8111aeb-2c95-4953-a2d0-586c5fcd4940-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b8111aeb-2c95-4953-a2d0-586c5fcd4940" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.762958 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b8111aeb-2c95-4953-a2d0-586c5fcd4940" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.789342 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b8111aeb-2c95-4953-a2d0-586c5fcd4940" (UID: "b8111aeb-2c95-4953-a2d0-586c5fcd4940"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.831062 4966 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.831106 4966 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b8111aeb-2c95-4953-a2d0-586c5fcd4940-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.831119 4966 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b8111aeb-2c95-4953-a2d0-586c5fcd4940-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.831131 4966 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.831144 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8111aeb-2c95-4953-a2d0-586c5fcd4940-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.831156 4966 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.831170 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8729\" (UniqueName: \"kubernetes.io/projected/b8111aeb-2c95-4953-a2d0-586c5fcd4940-kube-api-access-f8729\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:16 crc kubenswrapper[4966]: I0127 15:49:16.998061 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kctp8"]
Jan 27 15:49:17 crc kubenswrapper[4966]: I0127 15:49:17.004603 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kctp8"]
Jan 27 15:49:18 crc kubenswrapper[4966]: I0127 15:49:18.529108 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8111aeb-2c95-4953-a2d0-586c5fcd4940" path="/var/lib/kubelet/pods/b8111aeb-2c95-4953-a2d0-586c5fcd4940/volumes"
Jan 27 15:49:18 crc kubenswrapper[4966]: I0127 15:49:18.650936 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-sgfdr" podUID="a3c5438a-013d-48da-8a1b-8dd23e17bce6" containerName="console" containerID="cri-o://1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da" gracePeriod=15
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.229169 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sgfdr_a3c5438a-013d-48da-8a1b-8dd23e17bce6/console/0.log"
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.229623 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.367783 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-serving-cert\") pod \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") "
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.367832 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-service-ca\") pod \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") "
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.367868 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-848k7\" (UniqueName: \"kubernetes.io/projected/a3c5438a-013d-48da-8a1b-8dd23e17bce6-kube-api-access-848k7\") pod \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") "
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.367929 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-trusted-ca-bundle\") pod \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") "
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.368000 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-oauth-config\") pod \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") "
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.368038 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-oauth-serving-cert\") pod \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") "
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.368061 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-config\") pod \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\" (UID: \"a3c5438a-013d-48da-8a1b-8dd23e17bce6\") "
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.368997 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a3c5438a-013d-48da-8a1b-8dd23e17bce6" (UID: "a3c5438a-013d-48da-8a1b-8dd23e17bce6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.369218 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-config" (OuterVolumeSpecName: "console-config") pod "a3c5438a-013d-48da-8a1b-8dd23e17bce6" (UID: "a3c5438a-013d-48da-8a1b-8dd23e17bce6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.369299 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a3c5438a-013d-48da-8a1b-8dd23e17bce6" (UID: "a3c5438a-013d-48da-8a1b-8dd23e17bce6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.369564 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-service-ca" (OuterVolumeSpecName: "service-ca") pod "a3c5438a-013d-48da-8a1b-8dd23e17bce6" (UID: "a3c5438a-013d-48da-8a1b-8dd23e17bce6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.374264 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a3c5438a-013d-48da-8a1b-8dd23e17bce6" (UID: "a3c5438a-013d-48da-8a1b-8dd23e17bce6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.374333 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a3c5438a-013d-48da-8a1b-8dd23e17bce6" (UID: "a3c5438a-013d-48da-8a1b-8dd23e17bce6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.375298 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c5438a-013d-48da-8a1b-8dd23e17bce6-kube-api-access-848k7" (OuterVolumeSpecName: "kube-api-access-848k7") pod "a3c5438a-013d-48da-8a1b-8dd23e17bce6" (UID: "a3c5438a-013d-48da-8a1b-8dd23e17bce6"). InnerVolumeSpecName "kube-api-access-848k7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.469828 4966 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.469884 4966 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.469930 4966 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.469950 4966 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c5438a-013d-48da-8a1b-8dd23e17bce6-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.469969 4966 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.469986 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-848k7\" (UniqueName: \"kubernetes.io/projected/a3c5438a-013d-48da-8a1b-8dd23e17bce6-kube-api-access-848k7\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.470006 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3c5438a-013d-48da-8a1b-8dd23e17bce6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.683958 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sgfdr_a3c5438a-013d-48da-8a1b-8dd23e17bce6/console/0.log"
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.684314 4966 generic.go:334] "Generic (PLEG): container finished" podID="a3c5438a-013d-48da-8a1b-8dd23e17bce6" containerID="1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da" exitCode=2
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.684359 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sgfdr" event={"ID":"a3c5438a-013d-48da-8a1b-8dd23e17bce6","Type":"ContainerDied","Data":"1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da"}
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.684410 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sgfdr" event={"ID":"a3c5438a-013d-48da-8a1b-8dd23e17bce6","Type":"ContainerDied","Data":"129ad2048ba4772209bd451a32cd7fe01d80d8b11b9d5159af33d2d313d2b7aa"}
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.684434 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sgfdr"
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.684448 4966 scope.go:117] "RemoveContainer" containerID="1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da"
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.703556 4966 scope.go:117] "RemoveContainer" containerID="1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da"
Jan 27 15:49:19 crc kubenswrapper[4966]: E0127 15:49:19.704119 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da\": container with ID starting with 1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da not found: ID does not exist" containerID="1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da"
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.704174 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da"} err="failed to get container status \"1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da\": rpc error: code = NotFound desc = could not find container \"1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da\": container with ID starting with 1177a8183442b9fd017b60aed037627b7cb0f85365bafd92772ecc78a083f7da not found: ID does not exist"
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.716393 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sgfdr"]
Jan 27 15:49:19 crc kubenswrapper[4966]: I0127 15:49:19.723803 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-sgfdr"]
Jan 27 15:49:20 crc kubenswrapper[4966]: I0127 15:49:20.537232 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c5438a-013d-48da-8a1b-8dd23e17bce6" path="/var/lib/kubelet/pods/a3c5438a-013d-48da-8a1b-8dd23e17bce6/volumes"
Jan 27 15:49:23 crc kubenswrapper[4966]: I0127 15:49:23.419715 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm"
Jan 27 15:49:23 crc kubenswrapper[4966]: I0127 15:49:23.424247 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm"
Jan 27 15:49:44 crc kubenswrapper[4966]: I0127 15:49:44.453785 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 15:49:44 crc kubenswrapper[4966]: I0127 15:49:44.490010 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 15:49:44 crc kubenswrapper[4966]: I0127 15:49:44.881352 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.818420 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c557ffddd-h86q9"]
Jan 27 15:50:10 crc kubenswrapper[4966]: E0127 15:50:10.819219 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c5438a-013d-48da-8a1b-8dd23e17bce6" containerName="console"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.819238 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c5438a-013d-48da-8a1b-8dd23e17bce6" containerName="console"
Jan 27 15:50:10 crc kubenswrapper[4966]: E0127 15:50:10.819267 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8111aeb-2c95-4953-a2d0-586c5fcd4940" containerName="registry"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.819275 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8111aeb-2c95-4953-a2d0-586c5fcd4940" containerName="registry"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.819411 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c5438a-013d-48da-8a1b-8dd23e17bce6" containerName="console"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.819424 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8111aeb-2c95-4953-a2d0-586c5fcd4940" containerName="registry"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.820280 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.832845 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c557ffddd-h86q9"]
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.931551 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-service-ca\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.931611 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-trusted-ca-bundle\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.931650 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr4hk\" (UniqueName: \"kubernetes.io/projected/3e677fe7-a980-4979-92bc-966eed6ddf11-kube-api-access-cr4hk\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.931674 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-serving-cert\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.931828 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-oauth-config\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.931906 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-oauth-serving-cert\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:10 crc kubenswrapper[4966]: I0127 15:50:10.931941 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-console-config\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.033525 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-trusted-ca-bundle\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.033585 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr4hk\" (UniqueName: \"kubernetes.io/projected/3e677fe7-a980-4979-92bc-966eed6ddf11-kube-api-access-cr4hk\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.033624 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-serving-cert\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.033692 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-oauth-config\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.033733 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-oauth-serving-cert\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.033768 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-console-config\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.033848 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-service-ca\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.035189 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-service-ca\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.035665 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-console-config\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.036051 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-trusted-ca-bundle\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.036098 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-oauth-serving-cert\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.041581 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-serving-cert\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.047413 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-oauth-config\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.055300 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr4hk\" (UniqueName: \"kubernetes.io/projected/3e677fe7-a980-4979-92bc-966eed6ddf11-kube-api-access-cr4hk\") pod \"console-5c557ffddd-h86q9\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.138645 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:11 crc kubenswrapper[4966]: I0127 15:50:11.436560 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c557ffddd-h86q9"]
Jan 27 15:50:12 crc kubenswrapper[4966]: I0127 15:50:12.030807 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c557ffddd-h86q9" event={"ID":"3e677fe7-a980-4979-92bc-966eed6ddf11","Type":"ContainerStarted","Data":"2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5"}
Jan 27 15:50:12 crc kubenswrapper[4966]: I0127 15:50:12.031293 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c557ffddd-h86q9" event={"ID":"3e677fe7-a980-4979-92bc-966eed6ddf11","Type":"ContainerStarted","Data":"43fcb1f5ad85758fb5ab12f1ee2ad327bfb07879fbc6d81a592d2d8e7b181421"}
Jan 27 15:50:12 crc kubenswrapper[4966]: I0127 15:50:12.057608 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c557ffddd-h86q9" podStartSLOduration=2.057586768 podStartE2EDuration="2.057586768s" podCreationTimestamp="2026-01-27 15:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:50:12.055262712 +0000 UTC m=+478.358056230" watchObservedRunningTime="2026-01-27 15:50:12.057586768 +0000 UTC m=+478.360380296"
Jan 27 15:50:21 crc kubenswrapper[4966]: I0127 15:50:21.139631 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:21 crc kubenswrapper[4966]: I0127 15:50:21.141096 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:21 crc kubenswrapper[4966]: I0127 15:50:21.146749 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:22 crc kubenswrapper[4966]: I0127 15:50:22.110309 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c557ffddd-h86q9"
Jan 27 15:50:22 crc kubenswrapper[4966]: I0127 15:50:22.192429 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-559df6b64-pdjkp"]
Jan 27 15:50:40 crc kubenswrapper[4966]: I0127 15:50:40.120126 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:50:40 crc kubenswrapper[4966]: I0127 15:50:40.120760 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.258011 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-559df6b64-pdjkp" podUID="c9adf9a0-e627-47c6-a062-d8625cd43969" containerName="console" containerID="cri-o://34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5" gracePeriod=15
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.650082 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-559df6b64-pdjkp_c9adf9a0-e627-47c6-a062-d8625cd43969/console/0.log"
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.650393 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559df6b64-pdjkp"
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.712617 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-trusted-ca-bundle\") pod \"c9adf9a0-e627-47c6-a062-d8625cd43969\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") "
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.712690 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-oauth-config\") pod \"c9adf9a0-e627-47c6-a062-d8625cd43969\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") "
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.712766 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-oauth-serving-cert\") pod \"c9adf9a0-e627-47c6-a062-d8625cd43969\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") "
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.712800 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-console-config\") pod \"c9adf9a0-e627-47c6-a062-d8625cd43969\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") "
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.712821 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-serving-cert\") pod \"c9adf9a0-e627-47c6-a062-d8625cd43969\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") "
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.712861 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-service-ca\") pod \"c9adf9a0-e627-47c6-a062-d8625cd43969\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") "
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.712963 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw4fs\" (UniqueName: \"kubernetes.io/projected/c9adf9a0-e627-47c6-a062-d8625cd43969-kube-api-access-lw4fs\") pod \"c9adf9a0-e627-47c6-a062-d8625cd43969\" (UID: \"c9adf9a0-e627-47c6-a062-d8625cd43969\") "
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.713512 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c9adf9a0-e627-47c6-a062-d8625cd43969" (UID: "c9adf9a0-e627-47c6-a062-d8625cd43969"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.713538 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-service-ca" (OuterVolumeSpecName: "service-ca") pod "c9adf9a0-e627-47c6-a062-d8625cd43969" (UID: "c9adf9a0-e627-47c6-a062-d8625cd43969"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.713524 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-console-config" (OuterVolumeSpecName: "console-config") pod "c9adf9a0-e627-47c6-a062-d8625cd43969" (UID: "c9adf9a0-e627-47c6-a062-d8625cd43969"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.714032 4966 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.714090 4966 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.714092 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c9adf9a0-e627-47c6-a062-d8625cd43969" (UID: "c9adf9a0-e627-47c6-a062-d8625cd43969"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.714104 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.718217 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c9adf9a0-e627-47c6-a062-d8625cd43969" (UID: "c9adf9a0-e627-47c6-a062-d8625cd43969"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.718216 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9adf9a0-e627-47c6-a062-d8625cd43969-kube-api-access-lw4fs" (OuterVolumeSpecName: "kube-api-access-lw4fs") pod "c9adf9a0-e627-47c6-a062-d8625cd43969" (UID: "c9adf9a0-e627-47c6-a062-d8625cd43969"). InnerVolumeSpecName "kube-api-access-lw4fs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:50:47 crc kubenswrapper[4966]: I0127 15:50:47.718326 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c9adf9a0-e627-47c6-a062-d8625cd43969" (UID: "c9adf9a0-e627-47c6-a062-d8625cd43969"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:47.815956 4966 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:47.816012 4966 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9adf9a0-e627-47c6-a062-d8625cd43969-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:47.816043 4966 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9adf9a0-e627-47c6-a062-d8625cd43969-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:47.816068 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw4fs\" (UniqueName: \"kubernetes.io/projected/c9adf9a0-e627-47c6-a062-d8625cd43969-kube-api-access-lw4fs\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.295135 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-559df6b64-pdjkp_c9adf9a0-e627-47c6-a062-d8625cd43969/console/0.log"
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.295186 4966 generic.go:334] "Generic (PLEG): container finished" podID="c9adf9a0-e627-47c6-a062-d8625cd43969" containerID="34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5" exitCode=2
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.295218 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559df6b64-pdjkp" event={"ID":"c9adf9a0-e627-47c6-a062-d8625cd43969","Type":"ContainerDied","Data":"34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5"}
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.295247 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559df6b64-pdjkp" event={"ID":"c9adf9a0-e627-47c6-a062-d8625cd43969","Type":"ContainerDied","Data":"e350ab2120a11600b99149166f95c24336aefe4ab1d14734e90472570de20e8e"}
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.295268 4966 scope.go:117] "RemoveContainer" containerID="34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5"
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.295301 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559df6b64-pdjkp"
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.372956 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-559df6b64-pdjkp"]
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.380072 4966 scope.go:117] "RemoveContainer" containerID="34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5"
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.386785 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-559df6b64-pdjkp"]
Jan 27 15:50:48 crc kubenswrapper[4966]: E0127 15:50:48.388106 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5\": container with ID starting with 34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5 not found: ID does not exist" containerID="34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5"
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.388143 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5"} err="failed to get container status \"34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5\": rpc error: code = NotFound desc = could not find container \"34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5\": container with ID starting with 34c4e38dac65c4744cc3c314d73189f352e39beef739b05bae31d991494e4ca5 not found: ID does not exist"
Jan 27 15:50:48 crc kubenswrapper[4966]: I0127 15:50:48.527619 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9adf9a0-e627-47c6-a062-d8625cd43969" path="/var/lib/kubelet/pods/c9adf9a0-e627-47c6-a062-d8625cd43969/volumes"
Jan 27 15:51:10 crc kubenswrapper[4966]: I0127 15:51:10.119557 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:51:10 crc kubenswrapper[4966]: I0127 15:51:10.120254 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:51:40 crc kubenswrapper[4966]: I0127 15:51:40.119371 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:51:40 crc kubenswrapper[4966]: I0127 15:51:40.119994 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:51:40 crc kubenswrapper[4966]: I0127 15:51:40.120046 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v"
Jan 27 15:51:40 crc kubenswrapper[4966]: I0127 15:51:40.120666 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f252fee4f97cee252f6da079f9b6faf80ef04117d5518af7975a7228534e6c0"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 15:51:40 crc kubenswrapper[4966]: I0127 15:51:40.120729 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://2f252fee4f97cee252f6da079f9b6faf80ef04117d5518af7975a7228534e6c0" gracePeriod=600
Jan 27 15:51:40 crc kubenswrapper[4966]: I0127 15:51:40.679474 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="2f252fee4f97cee252f6da079f9b6faf80ef04117d5518af7975a7228534e6c0" exitCode=0
Jan 27 15:51:40 crc kubenswrapper[4966]: I0127 15:51:40.679589 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"2f252fee4f97cee252f6da079f9b6faf80ef04117d5518af7975a7228534e6c0"}
Jan 27 15:51:40 crc kubenswrapper[4966]: I0127 15:51:40.679955 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"a029a7ea4e898768651cb0cd5395b77c8220f6bb9b0dc0c1269341b2b5716b1b"}
Jan 27 15:51:40 crc kubenswrapper[4966]: I0127 15:51:40.680002 4966 scope.go:117] "RemoveContainer" containerID="fa991efa8c264472d6ff0c3eb9586659e3c6d4cca2ccc3928e23ac1cf4a47b67"
Jan 27 15:52:14 crc kubenswrapper[4966]: I0127 15:52:14.756743 4966 scope.go:117] "RemoveContainer" containerID="21693d3ffc8ee0aeeea062b80798940b1010af01c4730a7485e81d77f03e4c81"
Jan 27 15:53:40 crc kubenswrapper[4966]: I0127 15:53:40.119969 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:53:40 crc kubenswrapper[4966]: I0127 15:53:40.120786 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.121627 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"]
Jan 27 15:54:02 crc kubenswrapper[4966]: E0127 15:54:02.122466 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9adf9a0-e627-47c6-a062-d8625cd43969" containerName="console"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.122493 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9adf9a0-e627-47c6-a062-d8625cd43969" containerName="console"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.122639 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9adf9a0-e627-47c6-a062-d8625cd43969" containerName="console"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.123666 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.125524 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.133553 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"]
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.186984 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.187064 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.187111 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btk9s\" (UniqueName: \"kubernetes.io/projected/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-kube-api-access-btk9s\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.287854 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.288312 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.288433 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btk9s\" (UniqueName: \"kubernetes.io/projected/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-kube-api-access-btk9s\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.288779 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.290359 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.306366 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btk9s\" (UniqueName: \"kubernetes.io/projected/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-kube-api-access-btk9s\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.491354 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.681280 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"]
Jan 27 15:54:02 crc kubenswrapper[4966]: I0127 15:54:02.704381 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r" event={"ID":"5d93eed5-fedf-4b4e-b036-6fa9454b22a5","Type":"ContainerStarted","Data":"cb00245f02a5acb1b0a5a49c5371a0d6aee648346d78c9bbcdf97d6daf0590eb"}
Jan 27 15:54:03 crc kubenswrapper[4966]: I0127 15:54:03.713253 4966 generic.go:334] "Generic (PLEG): container finished" podID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerID="945f0f46697004cf87d09171617db803115685b5d861d341b1bd8e822b8f310e" exitCode=0
Jan 27 15:54:03 crc kubenswrapper[4966]: I0127 15:54:03.713300 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r" event={"ID":"5d93eed5-fedf-4b4e-b036-6fa9454b22a5","Type":"ContainerDied","Data":"945f0f46697004cf87d09171617db803115685b5d861d341b1bd8e822b8f310e"}
Jan 27 15:54:03 crc kubenswrapper[4966]: I0127 15:54:03.715480 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 15:54:05 crc kubenswrapper[4966]: I0127 15:54:05.730413 4966 generic.go:334] "Generic (PLEG): container finished" podID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerID="fd5ada7a382274ac8424c002e72b5a2510abf381b8275a504bb2e1b965ced4e1" exitCode=0
Jan 27 15:54:05 crc kubenswrapper[4966]: I0127 15:54:05.730469 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r" event={"ID":"5d93eed5-fedf-4b4e-b036-6fa9454b22a5","Type":"ContainerDied","Data":"fd5ada7a382274ac8424c002e72b5a2510abf381b8275a504bb2e1b965ced4e1"}
Jan 27 15:54:06 crc kubenswrapper[4966]: I0127 15:54:06.737539 4966 generic.go:334] "Generic (PLEG): container finished" podID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerID="04b6e0d156bf9773cbbf7695e6922e18d03464fcabd403941971ac6aa6cd9d94" exitCode=0
Jan 27 15:54:06 crc kubenswrapper[4966]: I0127 15:54:06.737578 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r" event={"ID":"5d93eed5-fedf-4b4e-b036-6fa9454b22a5","Type":"ContainerDied","Data":"04b6e0d156bf9773cbbf7695e6922e18d03464fcabd403941971ac6aa6cd9d94"}
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.043445 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.075424 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-util\") pod \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") "
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.075473 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btk9s\" (UniqueName: \"kubernetes.io/projected/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-kube-api-access-btk9s\") pod \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") "
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.075588 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-bundle\") pod \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\" (UID: \"5d93eed5-fedf-4b4e-b036-6fa9454b22a5\") "
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.078031 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-bundle" (OuterVolumeSpecName: "bundle") pod "5d93eed5-fedf-4b4e-b036-6fa9454b22a5" (UID: "5d93eed5-fedf-4b4e-b036-6fa9454b22a5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.080738 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-kube-api-access-btk9s" (OuterVolumeSpecName: "kube-api-access-btk9s") pod "5d93eed5-fedf-4b4e-b036-6fa9454b22a5" (UID: "5d93eed5-fedf-4b4e-b036-6fa9454b22a5"). InnerVolumeSpecName "kube-api-access-btk9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.176940 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btk9s\" (UniqueName: \"kubernetes.io/projected/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-kube-api-access-btk9s\") on node \"crc\" DevicePath \"\""
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.176970 4966 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.486038 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-util" (OuterVolumeSpecName: "util") pod "5d93eed5-fedf-4b4e-b036-6fa9454b22a5" (UID: "5d93eed5-fedf-4b4e-b036-6fa9454b22a5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.581789 4966 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d93eed5-fedf-4b4e-b036-6fa9454b22a5-util\") on node \"crc\" DevicePath \"\""
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.750318 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r" event={"ID":"5d93eed5-fedf-4b4e-b036-6fa9454b22a5","Type":"ContainerDied","Data":"cb00245f02a5acb1b0a5a49c5371a0d6aee648346d78c9bbcdf97d6daf0590eb"}
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.750363 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb00245f02a5acb1b0a5a49c5371a0d6aee648346d78c9bbcdf97d6daf0590eb"
Jan 27 15:54:08 crc kubenswrapper[4966]: I0127 15:54:08.750363 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r"
Jan 27 15:54:10 crc kubenswrapper[4966]: I0127 15:54:10.120157 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:54:10 crc kubenswrapper[4966]: I0127 15:54:10.120233 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.064615 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glbg8"]
Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.066376 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovn-controller" containerID="cri-o://859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9" gracePeriod=30
Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.066499 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovn-acl-logging" containerID="cri-o://d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b" gracePeriod=30
Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.066465 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kube-rbac-proxy-node" containerID="cri-o://01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162" gracePeriod=30
Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.066426 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="sbdb" containerID="cri-o://a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6" gracePeriod=30
Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.066606 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="northd" containerID="cri-o://70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e" gracePeriod=30
Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.066587 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="nbdb" containerID="cri-o://a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942" gracePeriod=30
Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.066494 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kube-rbac-proxy-ovn-metrics"
containerID="cri-o://dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3" gracePeriod=30 Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.121974 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" containerID="cri-o://6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae" gracePeriod=30 Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.794106 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5xktc_43e2b070-838d-4a18-9a86-1683f64b641c/kube-multus/1.log" Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.794999 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5xktc_43e2b070-838d-4a18-9a86-1683f64b641c/kube-multus/0.log" Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.795113 4966 generic.go:334] "Generic (PLEG): container finished" podID="43e2b070-838d-4a18-9a86-1683f64b641c" containerID="5c9cd0e94c2c0c257d9004d0cb7ddb379b63e0af8b4de162939674ce4fd270c4" exitCode=2 Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.795250 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xktc" event={"ID":"43e2b070-838d-4a18-9a86-1683f64b641c","Type":"ContainerDied","Data":"5c9cd0e94c2c0c257d9004d0cb7ddb379b63e0af8b4de162939674ce4fd270c4"} Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.795343 4966 scope.go:117] "RemoveContainer" containerID="7ad505bba6225d4dd1d2f1fca90221f9f287df281c04f03dab7c78e12c44e117" Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.796131 4966 scope.go:117] "RemoveContainer" containerID="5c9cd0e94c2c0c257d9004d0cb7ddb379b63e0af8b4de162939674ce4fd270c4" Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.802683 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovnkube-controller/3.log" Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.806127 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovn-acl-logging/0.log" Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807001 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovn-controller/0.log" Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807637 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae" exitCode=0 Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807686 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6" exitCode=0 Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807702 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942" exitCode=0 Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807720 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e" exitCode=0 Jan 27 15:54:13 crc 
kubenswrapper[4966]: I0127 15:54:13.807735 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b" exitCode=143 Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807749 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9" exitCode=143 Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807782 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae"} Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807857 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6"} Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807880 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942"} Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807928 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e"} Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.807947 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b"} Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.808055 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9"} Jan 27 15:54:13 crc kubenswrapper[4966]: I0127 15:54:13.845879 4966 scope.go:117] "RemoveContainer" containerID="8cd9525b2970d82e7d4eba30544d239545ef06bbe528b57245859c04e24024cb" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.271137 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovn-acl-logging/0.log" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.271732 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovn-controller/0.log" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.272276 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.335564 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zpccn"] Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336693 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336739 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336756 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336762 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336769 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336776 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336794 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerName="extract" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336800 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerName="extract" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336812 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336818 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336831 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovn-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336838 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovn-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336854 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kube-rbac-proxy-node" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336860 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kube-rbac-proxy-node" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336873 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kubecfg-setup" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336879 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kubecfg-setup" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336911 4966 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerName="util" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336918 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerName="util" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336924 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovn-acl-logging" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336930 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovn-acl-logging" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336939 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerName="pull" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336945 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerName="pull" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336960 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="nbdb" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336966 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="nbdb" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336976 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="northd" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.336982 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="northd" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.336998 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="sbdb" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337005 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="sbdb" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337324 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337338 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337346 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337358 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d93eed5-fedf-4b4e-b036-6fa9454b22a5" containerName="extract" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337371 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="nbdb" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337382 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovn-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337399 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" 
containerName="sbdb" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337408 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="northd" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337419 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovn-acl-logging" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337431 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337441 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="kube-rbac-proxy-node" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.337637 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337645 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.337661 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337667 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337868 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.337877 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" containerName="ovnkube-controller" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.341452 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372668 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-openvswitch\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372707 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-var-lib-openvswitch\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372736 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-script-lib\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372759 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-netns\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372783 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-env-overrides\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372797 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-config\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372815 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-bin\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372830 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgdlv\" (UniqueName: \"kubernetes.io/projected/4a25d116-d49b-4533-bac7-74bee93062b1-kube-api-access-fgdlv\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372842 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-kubelet\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372870 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-systemd\") pod 
\"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372888 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-systemd-units\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372918 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-ovn\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372936 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-log-socket\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372931 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372967 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-etc-openvswitch\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.372986 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-ovn-kubernetes\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373007 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-slash\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373023 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373055 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-netd\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373077 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-node-log\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373095 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a25d116-d49b-4533-bac7-74bee93062b1-ovn-node-metrics-cert\") pod \"4a25d116-d49b-4533-bac7-74bee93062b1\" (UID: \"4a25d116-d49b-4533-bac7-74bee93062b1\") " Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373129 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373113 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373203 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373209 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373247 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f910c8e0-1852-4c7b-8869-2e955366c062-env-overrides\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373256 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-slash" (OuterVolumeSpecName: "host-slash") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373270 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsw8c\" (UniqueName: \"kubernetes.io/projected/f910c8e0-1852-4c7b-8869-2e955366c062-kube-api-access-lsw8c\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373298 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-cni-netd\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373053 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373275 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-log-socket" (OuterVolumeSpecName: "log-socket") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373340 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373260 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373326 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-run-systemd\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373288 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373371 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-run-ovn\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373298 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373377 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-node-log" (OuterVolumeSpecName: "node-log") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373393 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f910c8e0-1852-4c7b-8869-2e955366c062-ovnkube-config\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373295 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373413 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-systemd-units\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373303 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373363 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373477 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-run-netns\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373529 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-etc-openvswitch\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373554 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-run-ovn-kubernetes\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373591 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-kubelet\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373673 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374150 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-log-socket\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.373961 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374219 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-slash\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374249 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f910c8e0-1852-4c7b-8869-2e955366c062-ovnkube-script-lib\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374364 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f910c8e0-1852-4c7b-8869-2e955366c062-ovn-node-metrics-cert\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374387 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-run-openvswitch\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374430 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-var-lib-openvswitch\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374454 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-node-log\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374474 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-cni-bin\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374598 4966 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374608 4966 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374617 4966 reconciler_common.go:293] "Volume detached for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374625 4966 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374636 4966 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374645 4966 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374655 4966 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374664 4966 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374674 4966 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374683 4966 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374692 4966 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374700 4966 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374708 4966 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374717 4966 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374727 4966 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374739 4966 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/4a25d116-d49b-4533-bac7-74bee93062b1-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.374748 4966 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.381344 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a25d116-d49b-4533-bac7-74bee93062b1-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.381519 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a25d116-d49b-4533-bac7-74bee93062b1-kube-api-access-fgdlv" (OuterVolumeSpecName: "kube-api-access-fgdlv") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "kube-api-access-fgdlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.390608 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4a25d116-d49b-4533-bac7-74bee93062b1" (UID: "4a25d116-d49b-4533-bac7-74bee93062b1"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475660 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsw8c\" (UniqueName: \"kubernetes.io/projected/f910c8e0-1852-4c7b-8869-2e955366c062-kube-api-access-lsw8c\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475710 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-cni-netd\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475736 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-run-systemd\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475750 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-run-ovn\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475769 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f910c8e0-1852-4c7b-8869-2e955366c062-ovnkube-config\") pod \"ovnkube-node-zpccn\" 
(UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475785 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-systemd-units\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475801 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-run-netns\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475820 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-etc-openvswitch\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475816 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-cni-netd\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475859 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-run-ovn-kubernetes\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475834 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-run-ovn-kubernetes\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475868 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-run-systemd\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475932 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-run-netns\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475956 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-kubelet\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 
15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475883 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-run-ovn\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475937 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-kubelet\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475974 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-etc-openvswitch\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.475989 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-systemd-units\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476026 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-log-socket\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476044 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476079 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-slash\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476093 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-log-socket\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476097 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f910c8e0-1852-4c7b-8869-2e955366c062-ovnkube-script-lib\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476110 4966 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476122 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-slash\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476163 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f910c8e0-1852-4c7b-8869-2e955366c062-ovn-node-metrics-cert\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476183 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-run-openvswitch\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476206 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-var-lib-openvswitch\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476222 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-node-log\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476238 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-cni-bin\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476268 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f910c8e0-1852-4c7b-8869-2e955366c062-env-overrides\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476316 4966 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a25d116-d49b-4533-bac7-74bee93062b1-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476328 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgdlv\" (UniqueName: \"kubernetes.io/projected/4a25d116-d49b-4533-bac7-74bee93062b1-kube-api-access-fgdlv\") on node \"crc\" DevicePath \"\"" Jan 
27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476337 4966 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a25d116-d49b-4533-bac7-74bee93062b1-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476704 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-var-lib-openvswitch\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476744 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f910c8e0-1852-4c7b-8869-2e955366c062-env-overrides\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476743 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-run-openvswitch\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476764 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-node-log\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476782 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f910c8e0-1852-4c7b-8869-2e955366c062-host-cni-bin\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.476829 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f910c8e0-1852-4c7b-8869-2e955366c062-ovnkube-config\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.477136 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f910c8e0-1852-4c7b-8869-2e955366c062-ovnkube-script-lib\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.499653 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f910c8e0-1852-4c7b-8869-2e955366c062-ovn-node-metrics-cert\") pod \"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.518388 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsw8c\" (UniqueName: \"kubernetes.io/projected/f910c8e0-1852-4c7b-8869-2e955366c062-kube-api-access-lsw8c\") pod 
\"ovnkube-node-zpccn\" (UID: \"f910c8e0-1852-4c7b-8869-2e955366c062\") " pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.657040 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.817569 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovn-acl-logging/0.log" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.818027 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glbg8_4a25d116-d49b-4533-bac7-74bee93062b1/ovn-controller/0.log" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.818438 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3" exitCode=0 Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.818464 4966 generic.go:334] "Generic (PLEG): container finished" podID="4a25d116-d49b-4533-bac7-74bee93062b1" containerID="01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162" exitCode=0 Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.818471 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3"} Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.818521 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162"} Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.818534 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" event={"ID":"4a25d116-d49b-4533-bac7-74bee93062b1","Type":"ContainerDied","Data":"de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28"} Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.818536 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glbg8" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.818559 4966 scope.go:117] "RemoveContainer" containerID="6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.820622 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5xktc_43e2b070-838d-4a18-9a86-1683f64b641c/kube-multus/1.log" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.820686 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xktc" event={"ID":"43e2b070-838d-4a18-9a86-1683f64b641c","Type":"ContainerStarted","Data":"5d9afa6daf430c73de36afcaf38915b33e7ab2dcb56c3db9ab68f2f18fe12303"} Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.827240 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerStarted","Data":"ff0cf55f5a86e8ef32e9bf49bad5ccc9c2836ca53c59dce93ea9ac024ba6e774"} Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.837114 4966 scope.go:117] "RemoveContainer" containerID="a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.840763 4966 scope.go:117] "RemoveContainer" containerID="a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.852377 4966 scope.go:117] "RemoveContainer" containerID="a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.869841 4966 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_nbdb_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: 'a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942'" containerID="a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.870155 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942"} err="rpc error: code = Unknown desc = failed to delete container k8s_nbdb_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: 'a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942'" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.869883 4966 scope.go:117] "RemoveContainer" containerID="a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.870242 4966 scope.go:117] "RemoveContainer" containerID="70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.870970 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\": container with ID starting with a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6 not found: ID does not exist" containerID="a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6" Jan 27 
15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.871005 4966 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\": rpc error: code = NotFound desc = could not find container \"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\": container with ID starting with a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6 not found: ID does not exist" containerID="a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.871022 4966 scope.go:117] "RemoveContainer" containerID="70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.874732 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glbg8"] Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.886300 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glbg8"] Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.933016 4966 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_northd_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: '70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e'" containerID="70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.933073 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e"} err="rpc error: code = Unknown desc = failed to delete container k8s_northd_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: '70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e'" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.933111 4966 scope.go:117] "RemoveContainer" containerID="dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.933170 4966 scope.go:117] "RemoveContainer" containerID="01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.978071 4966 scope.go:117] "RemoveContainer" containerID="dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.986280 4966 scope.go:117] "RemoveContainer" containerID="01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.986689 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\": container with ID starting with 01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162 not found: ID does not exist" containerID="01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.986718 4966 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162"} err="failed to get container status \"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\": rpc error: code = NotFound desc = could not find container \"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\": container with ID starting with 01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162 not found: ID does not exist" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.986766 4966 scope.go:117] "RemoveContainer" containerID="d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.988616 4966 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-rbac-proxy-ovn-metrics_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: 'dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3'" containerID="dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3" Jan 27 15:54:14 crc kubenswrapper[4966]: E0127 15:54:14.988646 4966 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-rbac-proxy-ovn-metrics_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: 'dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3'" containerID="dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3" Jan 27 15:54:14 crc kubenswrapper[4966]: I0127 15:54:14.988666 4966 scope.go:117] "RemoveContainer" containerID="d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.006515 4966 scope.go:117] "RemoveContainer" containerID="859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.006662 4966 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-acl-logging_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: 'd412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b'" containerID="d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.006686 4966 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-acl-logging_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: 'd412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b'" containerID="d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.006704 4966 scope.go:117] "RemoveContainer" containerID="859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.030786 4966 scope.go:117] "RemoveContainer" containerID="7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6" Jan 27 
15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.030993 4966 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-controller_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: '859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9'" containerID="859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.031030 4966 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-controller_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: '859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9'" containerID="859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.031058 4966 scope.go:117] "RemoveContainer" containerID="7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.047287 4966 scope.go:117] "RemoveContainer" containerID="6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.047728 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae\": container with ID starting with 6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae not found: ID does not exist" containerID="6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.047788 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae"} err="failed to get container status \"6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae\": rpc error: code = NotFound desc = could not find container \"6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae\": container with ID starting with 6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.047819 4966 scope.go:117] "RemoveContainer" containerID="a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.048090 4966 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kubecfg-setup_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: '7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6'" containerID="7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.048147 4966 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kubecfg-setup_ovnkube-node-glbg8_openshift-ovn-kubernetes_4a25d116-d49b-4533-bac7-74bee93062b1_0 in pod sandbox 
de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28 from index: no such id: '7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6'" containerID="7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.048283 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6"} err="failed to get container status \"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\": rpc error: code = NotFound desc = could not find container \"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\": container with ID starting with a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.048312 4966 scope.go:117] "RemoveContainer" containerID="a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.049918 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\": container with ID starting with a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942 not found: ID does not exist" containerID="a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.049945 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942"} err="failed to get container status \"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\": rpc error: code = NotFound desc = could not find container \"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\": container with ID starting with a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.049967 4966 scope.go:117] "RemoveContainer" containerID="70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.050784 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\": container with ID starting with 70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e not found: ID does not exist" containerID="70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.050839 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e"} err="failed to get container status \"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\": rpc error: code = NotFound desc = could not find container \"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\": container with ID starting with 70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.050857 4966 scope.go:117] "RemoveContainer" containerID="dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.051118 4966 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\": container with ID starting with dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3 not found: ID does not exist" containerID="dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.051142 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3"} err="failed to get container status \"dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\": rpc error: code = NotFound desc = could not find container \"dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\": container with ID starting with dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.051157 4966 scope.go:117] "RemoveContainer" containerID="01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.051508 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162"} err="failed to get container status \"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\": rpc error: code = NotFound desc = could not find container \"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\": container with ID starting with 01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.051551 4966 scope.go:117] "RemoveContainer" containerID="d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.051813 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\": container with ID starting with d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b not found: ID does not exist" containerID="d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.051835 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b"} err="failed to get container status \"d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\": rpc error: code = NotFound desc = could not find container \"d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\": container with ID starting with d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.051852 4966 scope.go:117] "RemoveContainer" containerID="859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.052149 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\": container with ID starting with 859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9 not found: ID does 
not exist" containerID="859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.052171 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9"} err="failed to get container status \"859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\": rpc error: code = NotFound desc = could not find container \"859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\": container with ID starting with 859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.052200 4966 scope.go:117] "RemoveContainer" containerID="7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6" Jan 27 15:54:15 crc kubenswrapper[4966]: E0127 15:54:15.052441 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\": container with ID starting with 7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6 not found: ID does not exist" containerID="7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.052461 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6"} err="failed to get container status \"7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\": rpc error: code = NotFound desc = could not find container \"7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\": container with ID starting with 7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.052472 4966 scope.go:117] "RemoveContainer" containerID="6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.052757 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae"} err="failed to get container status \"6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae\": rpc error: code = NotFound desc = could not find container \"6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae\": container with ID starting with 6b348f76c6afdfb4c2ef04e847af8d9f695c74115e83f8ca62e06a7f00deddae not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.052774 4966 scope.go:117] "RemoveContainer" containerID="a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.053394 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6"} err="failed to get container status \"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\": rpc error: code = NotFound desc = could not find container \"a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6\": container with ID starting with a6236429f027b98b5af25484334b387e859bebdbce93f87070cfd1ceaa28e1c6 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.053425 4966 scope.go:117] 
"RemoveContainer" containerID="a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.053815 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942"} err="failed to get container status \"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\": rpc error: code = NotFound desc = could not find container \"a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942\": container with ID starting with a1b2f436c72cfd9a9976db9f13c3bd80e1fcffd8755d88804aeb4819758a3942 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.053837 4966 scope.go:117] "RemoveContainer" containerID="70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.054369 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e"} err="failed to get container status \"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\": rpc error: code = NotFound desc = could not find container \"70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e\": container with ID starting with 70ee39868d927dd1caa9ce077c26f1df721f35af78a122d219925e87fce23a7e not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.054403 4966 scope.go:117] "RemoveContainer" containerID="dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.056748 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3"} err="failed to get container status \"dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\": rpc error: code = NotFound desc = could not find container \"dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3\": container with ID starting with dd32135c0f089d2768c9a78f57b54852f22c5d0c504d3a84ae020f84976e8ce3 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.056781 4966 scope.go:117] "RemoveContainer" containerID="01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.057102 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162"} err="failed to get container status \"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\": rpc error: code = NotFound desc = could not find container \"01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162\": container with ID starting with 01c543fc2cdf8b332aa99cdcdedf7d342ed9cc4ed8d496e019a1ffe55d9f7162 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.057120 4966 scope.go:117] "RemoveContainer" containerID="d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.057385 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b"} err="failed to get container status \"d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\": rpc error: code = NotFound desc 
= could not find container \"d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b\": container with ID starting with d412eb2a9ec059b52b21516754c83ad1274248a4352a81186c2a3f37df31ee1b not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.057405 4966 scope.go:117] "RemoveContainer" containerID="859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.057721 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9"} err="failed to get container status \"859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\": rpc error: code = NotFound desc = could not find container \"859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9\": container with ID starting with 859a69a14568dda96cfb3802ab8a6ca5e54c2a51d18607e689569f7f52f70ab9 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.057747 4966 scope.go:117] "RemoveContainer" containerID="7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.058677 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6"} err="failed to get container status \"7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\": rpc error: code = NotFound desc = could not find container \"7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6\": container with ID starting with 7e0cc5466c84914b53ce6284023204a40367e89bda42c5fc1c512383934591a6 not found: ID does not exist" Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.832986 4966 generic.go:334] "Generic (PLEG): container finished" podID="f910c8e0-1852-4c7b-8869-2e955366c062" containerID="bae3dfb424f49c9a2d1736c00746383cc81de044ace3d904ecf611f93657edc0" exitCode=0 Jan 27 15:54:15 crc kubenswrapper[4966]: I0127 15:54:15.833021 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerDied","Data":"bae3dfb424f49c9a2d1736c00746383cc81de044ace3d904ecf611f93657edc0"} Jan 27 15:54:16 crc kubenswrapper[4966]: I0127 15:54:16.527151 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a25d116-d49b-4533-bac7-74bee93062b1" path="/var/lib/kubelet/pods/4a25d116-d49b-4533-bac7-74bee93062b1/volumes" Jan 27 15:54:16 crc kubenswrapper[4966]: E0127 15:54:16.640887 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:54:16 crc kubenswrapper[4966]: I0127 15:54:16.840618 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerStarted","Data":"62c3c2a3ebe7e0f79ab7f8a8cccbaf7aceb7a5618ed04b4a5571561ca9b4c576"} Jan 27 15:54:16 crc kubenswrapper[4966]: I0127 15:54:16.840657 4966 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerStarted","Data":"a5b754eb22f303df0a30bfeb9d72551af13b8e42b29e5eb9b7dc3754488f753c"} Jan 27 15:54:16 crc kubenswrapper[4966]: I0127 15:54:16.840665 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerStarted","Data":"d40562989eb9930a05a7040f6319a1aa4fd5db0302ed78f31248d0d62fdcef55"} Jan 27 15:54:17 crc kubenswrapper[4966]: I0127 15:54:17.849685 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerStarted","Data":"9de2df90f38f3f4548a3bc613a071fe65cc50a6527df65deecda4256cbfa5ae8"} Jan 27 15:54:17 crc kubenswrapper[4966]: I0127 15:54:17.850039 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerStarted","Data":"58140fffcc66234716743b1c09ffd567f434ad86593490b3f19fa41b66ed41e5"} Jan 27 15:54:17 crc kubenswrapper[4966]: I0127 15:54:17.850050 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerStarted","Data":"8c9b1807b57497741732b9641948c9e09f5ffec0d8a923dd3260c685a7d2c073"} Jan 27 15:54:19 crc kubenswrapper[4966]: I0127 15:54:19.879930 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2"] Jan 27 15:54:19 crc kubenswrapper[4966]: I0127 15:54:19.880978 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:19 crc kubenswrapper[4966]: I0127 15:54:19.887020 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 15:54:19 crc kubenswrapper[4966]: I0127 15:54:19.887037 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 15:54:19 crc kubenswrapper[4966]: I0127 15:54:19.887778 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-69qk2" Jan 27 15:54:19 crc kubenswrapper[4966]: I0127 15:54:19.953473 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcxx7\" (UniqueName: \"kubernetes.io/projected/15e87d6f-3d12-45d6-9d4c-e23919de2787-kube-api-access-lcxx7\") pod \"obo-prometheus-operator-68bc856cb9-vwbv2\" (UID: \"15e87d6f-3d12-45d6-9d4c-e23919de2787\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:19 crc kubenswrapper[4966]: I0127 15:54:19.999502 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d"] Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.000372 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.003291 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-cmklw" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.006099 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.013440 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89"] Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.014119 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.054452 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81566a49-33d9-4ca2-baa8-1944c4769bf5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89\" (UID: \"81566a49-33d9-4ca2-baa8-1944c4769bf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.054708 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81566a49-33d9-4ca2-baa8-1944c4769bf5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89\" (UID: \"81566a49-33d9-4ca2-baa8-1944c4769bf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.054791 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcxx7\" (UniqueName: \"kubernetes.io/projected/15e87d6f-3d12-45d6-9d4c-e23919de2787-kube-api-access-lcxx7\") pod \"obo-prometheus-operator-68bc856cb9-vwbv2\" (UID: \"15e87d6f-3d12-45d6-9d4c-e23919de2787\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.054879 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f59b8d2-65c5-447c-b71c-6aa014c7e531-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d\" (UID: \"8f59b8d2-65c5-447c-b71c-6aa014c7e531\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.055008 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f59b8d2-65c5-447c-b71c-6aa014c7e531-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d\" (UID: \"8f59b8d2-65c5-447c-b71c-6aa014c7e531\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.076595 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcxx7\" (UniqueName: \"kubernetes.io/projected/15e87d6f-3d12-45d6-9d4c-e23919de2787-kube-api-access-lcxx7\") 
pod \"obo-prometheus-operator-68bc856cb9-vwbv2\" (UID: \"15e87d6f-3d12-45d6-9d4c-e23919de2787\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.155835 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f59b8d2-65c5-447c-b71c-6aa014c7e531-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d\" (UID: \"8f59b8d2-65c5-447c-b71c-6aa014c7e531\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.156078 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81566a49-33d9-4ca2-baa8-1944c4769bf5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89\" (UID: \"81566a49-33d9-4ca2-baa8-1944c4769bf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.156153 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81566a49-33d9-4ca2-baa8-1944c4769bf5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89\" (UID: \"81566a49-33d9-4ca2-baa8-1944c4769bf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.156241 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f59b8d2-65c5-447c-b71c-6aa014c7e531-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d\" (UID: \"8f59b8d2-65c5-447c-b71c-6aa014c7e531\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.158705 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f59b8d2-65c5-447c-b71c-6aa014c7e531-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d\" (UID: \"8f59b8d2-65c5-447c-b71c-6aa014c7e531\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.158848 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81566a49-33d9-4ca2-baa8-1944c4769bf5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89\" (UID: \"81566a49-33d9-4ca2-baa8-1944c4769bf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.159582 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f59b8d2-65c5-447c-b71c-6aa014c7e531-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d\" (UID: \"8f59b8d2-65c5-447c-b71c-6aa014c7e531\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.159589 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/81566a49-33d9-4ca2-baa8-1944c4769bf5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89\" (UID: \"81566a49-33d9-4ca2-baa8-1944c4769bf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.193781 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9876t"] Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.194767 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.196432 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.196499 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.196587 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-dnl5q" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.221683 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators_15e87d6f-3d12-45d6-9d4c-e23919de2787_0(e9b8ef300fcd66b25a8db0f773b6a05b5f34d54412d4197ca671d29615a816d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.221762 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators_15e87d6f-3d12-45d6-9d4c-e23919de2787_0(e9b8ef300fcd66b25a8db0f773b6a05b5f34d54412d4197ca671d29615a816d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.221791 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators_15e87d6f-3d12-45d6-9d4c-e23919de2787_0(e9b8ef300fcd66b25a8db0f773b6a05b5f34d54412d4197ca671d29615a816d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.221851 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators(15e87d6f-3d12-45d6-9d4c-e23919de2787)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators(15e87d6f-3d12-45d6-9d4c-e23919de2787)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators_15e87d6f-3d12-45d6-9d4c-e23919de2787_0(e9b8ef300fcd66b25a8db0f773b6a05b5f34d54412d4197ca671d29615a816d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" podUID="15e87d6f-3d12-45d6-9d4c-e23919de2787" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.257590 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0443c8da-0b0f-4632-b990-f83e403a8b82-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9876t\" (UID: \"0443c8da-0b0f-4632-b990-f83e403a8b82\") " pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.257694 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvbhr\" (UniqueName: \"kubernetes.io/projected/0443c8da-0b0f-4632-b990-f83e403a8b82-kube-api-access-rvbhr\") pod \"observability-operator-59bdc8b94-9876t\" (UID: \"0443c8da-0b0f-4632-b990-f83e403a8b82\") " pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.314193 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.328572 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.349081 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators_8f59b8d2-65c5-447c-b71c-6aa014c7e531_0(83ad1df6f04966cc4f9b7e6ef26f249060ade3ca0558d08098b20ae66524468a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.349160 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators_8f59b8d2-65c5-447c-b71c-6aa014c7e531_0(83ad1df6f04966cc4f9b7e6ef26f249060ade3ca0558d08098b20ae66524468a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.349186 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators_8f59b8d2-65c5-447c-b71c-6aa014c7e531_0(83ad1df6f04966cc4f9b7e6ef26f249060ade3ca0558d08098b20ae66524468a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.349239 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators(8f59b8d2-65c5-447c-b71c-6aa014c7e531)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators(8f59b8d2-65c5-447c-b71c-6aa014c7e531)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators_8f59b8d2-65c5-447c-b71c-6aa014c7e531_0(83ad1df6f04966cc4f9b7e6ef26f249060ade3ca0558d08098b20ae66524468a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" podUID="8f59b8d2-65c5-447c-b71c-6aa014c7e531" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.359323 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0443c8da-0b0f-4632-b990-f83e403a8b82-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9876t\" (UID: \"0443c8da-0b0f-4632-b990-f83e403a8b82\") " pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.359592 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvbhr\" (UniqueName: \"kubernetes.io/projected/0443c8da-0b0f-4632-b990-f83e403a8b82-kube-api-access-rvbhr\") pod \"observability-operator-59bdc8b94-9876t\" (UID: \"0443c8da-0b0f-4632-b990-f83e403a8b82\") " pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.362662 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0443c8da-0b0f-4632-b990-f83e403a8b82-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9876t\" (UID: \"0443c8da-0b0f-4632-b990-f83e403a8b82\") " pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.369974 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators_81566a49-33d9-4ca2-baa8-1944c4769bf5_0(269a80a6795cd03be6f7358762915f2a02ed2cfba82fc1857f047ebe35656e19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.370045 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators_81566a49-33d9-4ca2-baa8-1944c4769bf5_0(269a80a6795cd03be6f7358762915f2a02ed2cfba82fc1857f047ebe35656e19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.370072 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators_81566a49-33d9-4ca2-baa8-1944c4769bf5_0(269a80a6795cd03be6f7358762915f2a02ed2cfba82fc1857f047ebe35656e19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.370122 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators(81566a49-33d9-4ca2-baa8-1944c4769bf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators(81566a49-33d9-4ca2-baa8-1944c4769bf5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators_81566a49-33d9-4ca2-baa8-1944c4769bf5_0(269a80a6795cd03be6f7358762915f2a02ed2cfba82fc1857f047ebe35656e19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" podUID="81566a49-33d9-4ca2-baa8-1944c4769bf5" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.378526 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvbhr\" (UniqueName: \"kubernetes.io/projected/0443c8da-0b0f-4632-b990-f83e403a8b82-kube-api-access-rvbhr\") pod \"observability-operator-59bdc8b94-9876t\" (UID: \"0443c8da-0b0f-4632-b990-f83e403a8b82\") " pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.402957 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-x6l4k"] Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.403911 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.411168 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-mtnhw" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.461184 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8sk2\" (UniqueName: \"kubernetes.io/projected/b9f6b9a4-ded2-467b-9e87-6fafa667f709-kube-api-access-l8sk2\") pod \"perses-operator-5bf474d74f-x6l4k\" (UID: \"b9f6b9a4-ded2-467b-9e87-6fafa667f709\") " pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.461243 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b9f6b9a4-ded2-467b-9e87-6fafa667f709-openshift-service-ca\") pod \"perses-operator-5bf474d74f-x6l4k\" (UID: \"b9f6b9a4-ded2-467b-9e87-6fafa667f709\") " pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.528063 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.552369 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9876t_openshift-operators_0443c8da-0b0f-4632-b990-f83e403a8b82_0(02dac210d1d924064af330dd352c1a5a3aa4447b4502836c6d141f6d1e77b470): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.552428 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9876t_openshift-operators_0443c8da-0b0f-4632-b990-f83e403a8b82_0(02dac210d1d924064af330dd352c1a5a3aa4447b4502836c6d141f6d1e77b470): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.552448 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9876t_openshift-operators_0443c8da-0b0f-4632-b990-f83e403a8b82_0(02dac210d1d924064af330dd352c1a5a3aa4447b4502836c6d141f6d1e77b470): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.552486 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9876t_openshift-operators(0443c8da-0b0f-4632-b990-f83e403a8b82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9876t_openshift-operators(0443c8da-0b0f-4632-b990-f83e403a8b82)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9876t_openshift-operators_0443c8da-0b0f-4632-b990-f83e403a8b82_0(02dac210d1d924064af330dd352c1a5a3aa4447b4502836c6d141f6d1e77b470): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.562577 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8sk2\" (UniqueName: \"kubernetes.io/projected/b9f6b9a4-ded2-467b-9e87-6fafa667f709-kube-api-access-l8sk2\") pod \"perses-operator-5bf474d74f-x6l4k\" (UID: \"b9f6b9a4-ded2-467b-9e87-6fafa667f709\") " pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.562760 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b9f6b9a4-ded2-467b-9e87-6fafa667f709-openshift-service-ca\") pod \"perses-operator-5bf474d74f-x6l4k\" (UID: \"b9f6b9a4-ded2-467b-9e87-6fafa667f709\") " pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.563601 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b9f6b9a4-ded2-467b-9e87-6fafa667f709-openshift-service-ca\") pod \"perses-operator-5bf474d74f-x6l4k\" (UID: \"b9f6b9a4-ded2-467b-9e87-6fafa667f709\") " pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.579175 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8sk2\" (UniqueName: \"kubernetes.io/projected/b9f6b9a4-ded2-467b-9e87-6fafa667f709-kube-api-access-l8sk2\") pod \"perses-operator-5bf474d74f-x6l4k\" (UID: \"b9f6b9a4-ded2-467b-9e87-6fafa667f709\") " pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.720445 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.752231 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x6l4k_openshift-operators_b9f6b9a4-ded2-467b-9e87-6fafa667f709_0(14f818186bd072ad3ba081f18899d909a7ae9d0930ee02e0d03b6b4ed1cbc75c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.752376 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x6l4k_openshift-operators_b9f6b9a4-ded2-467b-9e87-6fafa667f709_0(14f818186bd072ad3ba081f18899d909a7ae9d0930ee02e0d03b6b4ed1cbc75c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.752441 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x6l4k_openshift-operators_b9f6b9a4-ded2-467b-9e87-6fafa667f709_0(14f818186bd072ad3ba081f18899d909a7ae9d0930ee02e0d03b6b4ed1cbc75c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:20 crc kubenswrapper[4966]: E0127 15:54:20.752543 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-x6l4k_openshift-operators(b9f6b9a4-ded2-467b-9e87-6fafa667f709)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-x6l4k_openshift-operators(b9f6b9a4-ded2-467b-9e87-6fafa667f709)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x6l4k_openshift-operators_b9f6b9a4-ded2-467b-9e87-6fafa667f709_0(14f818186bd072ad3ba081f18899d909a7ae9d0930ee02e0d03b6b4ed1cbc75c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" podUID="b9f6b9a4-ded2-467b-9e87-6fafa667f709" Jan 27 15:54:20 crc kubenswrapper[4966]: I0127 15:54:20.868548 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerStarted","Data":"a943b0644126663f6a385c24a19a1a8f671ad92a891620b000b2ad57747cdfb2"} Jan 27 15:54:22 crc kubenswrapper[4966]: E0127 15:54:22.596273 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache]" Jan 27 15:54:22 crc kubenswrapper[4966]: I0127 15:54:22.881757 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" event={"ID":"f910c8e0-1852-4c7b-8869-2e955366c062","Type":"ContainerStarted","Data":"ba5c09d69801274b974cbc03d8c993af28b10eeedb5c2685e668fb3a61a4c3f2"} Jan 27 15:54:22 crc kubenswrapper[4966]: I0127 15:54:22.882825 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:22 crc kubenswrapper[4966]: I0127 15:54:22.882853 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:22 crc kubenswrapper[4966]: I0127 15:54:22.882910 4966 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:22 crc kubenswrapper[4966]: I0127 15:54:22.921100 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:22 crc kubenswrapper[4966]: I0127 15:54:22.924914 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:22 crc kubenswrapper[4966]: I0127 15:54:22.959059 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" podStartSLOduration=8.959044208 podStartE2EDuration="8.959044208s" podCreationTimestamp="2026-01-27 15:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:54:22.919638693 +0000 UTC m=+729.222432201" watchObservedRunningTime="2026-01-27 15:54:22.959044208 +0000 UTC m=+729.261837696" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.154690 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9876t"] Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.155107 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.155548 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.166571 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2"] Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.166699 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.167202 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.186357 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89"] Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.186486 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.195604 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.219959 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d"] Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.220107 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.220577 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.225303 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9876t_openshift-operators_0443c8da-0b0f-4632-b990-f83e403a8b82_0(6a0f34640739568449de48905a2447b6a5becfbdbf72ce3302ce2970b6574458): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.225373 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9876t_openshift-operators_0443c8da-0b0f-4632-b990-f83e403a8b82_0(6a0f34640739568449de48905a2447b6a5becfbdbf72ce3302ce2970b6574458): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.225401 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9876t_openshift-operators_0443c8da-0b0f-4632-b990-f83e403a8b82_0(6a0f34640739568449de48905a2447b6a5becfbdbf72ce3302ce2970b6574458): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.225453 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9876t_openshift-operators(0443c8da-0b0f-4632-b990-f83e403a8b82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9876t_openshift-operators(0443c8da-0b0f-4632-b990-f83e403a8b82)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9876t_openshift-operators_0443c8da-0b0f-4632-b990-f83e403a8b82_0(6a0f34640739568449de48905a2447b6a5becfbdbf72ce3302ce2970b6574458): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.230634 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-x6l4k"] Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.230788 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:23 crc kubenswrapper[4966]: I0127 15:54:23.231199 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.232143 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators_15e87d6f-3d12-45d6-9d4c-e23919de2787_0(f313f2bc47f2333580f1e30c4c6fd1d7a1e8d3b2a7037b4c34e4e0f87be6cf60): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.232216 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators_15e87d6f-3d12-45d6-9d4c-e23919de2787_0(f313f2bc47f2333580f1e30c4c6fd1d7a1e8d3b2a7037b4c34e4e0f87be6cf60): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.232243 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators_15e87d6f-3d12-45d6-9d4c-e23919de2787_0(f313f2bc47f2333580f1e30c4c6fd1d7a1e8d3b2a7037b4c34e4e0f87be6cf60): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.232290 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators(15e87d6f-3d12-45d6-9d4c-e23919de2787)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators(15e87d6f-3d12-45d6-9d4c-e23919de2787)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwbv2_openshift-operators_15e87d6f-3d12-45d6-9d4c-e23919de2787_0(f313f2bc47f2333580f1e30c4c6fd1d7a1e8d3b2a7037b4c34e4e0f87be6cf60): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" podUID="15e87d6f-3d12-45d6-9d4c-e23919de2787" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.292043 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators_81566a49-33d9-4ca2-baa8-1944c4769bf5_0(e4dd5b60628ed0a95b7acdb078d6c94fb61807208b257646353eb21407174892): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.292133 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators_81566a49-33d9-4ca2-baa8-1944c4769bf5_0(e4dd5b60628ed0a95b7acdb078d6c94fb61807208b257646353eb21407174892): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.292162 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators_81566a49-33d9-4ca2-baa8-1944c4769bf5_0(e4dd5b60628ed0a95b7acdb078d6c94fb61807208b257646353eb21407174892): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.292214 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators(81566a49-33d9-4ca2-baa8-1944c4769bf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators(81566a49-33d9-4ca2-baa8-1944c4769bf5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_openshift-operators_81566a49-33d9-4ca2-baa8-1944c4769bf5_0(e4dd5b60628ed0a95b7acdb078d6c94fb61807208b257646353eb21407174892): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" podUID="81566a49-33d9-4ca2-baa8-1944c4769bf5" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.309328 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x6l4k_openshift-operators_b9f6b9a4-ded2-467b-9e87-6fafa667f709_0(430e56253522225578950f65553f28021a459d63e8d2dc23c205be0efe3a3092): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.309386 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x6l4k_openshift-operators_b9f6b9a4-ded2-467b-9e87-6fafa667f709_0(430e56253522225578950f65553f28021a459d63e8d2dc23c205be0efe3a3092): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.309408 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x6l4k_openshift-operators_b9f6b9a4-ded2-467b-9e87-6fafa667f709_0(430e56253522225578950f65553f28021a459d63e8d2dc23c205be0efe3a3092): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.309447 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-x6l4k_openshift-operators(b9f6b9a4-ded2-467b-9e87-6fafa667f709)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-x6l4k_openshift-operators(b9f6b9a4-ded2-467b-9e87-6fafa667f709)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x6l4k_openshift-operators_b9f6b9a4-ded2-467b-9e87-6fafa667f709_0(430e56253522225578950f65553f28021a459d63e8d2dc23c205be0efe3a3092): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" podUID="b9f6b9a4-ded2-467b-9e87-6fafa667f709" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.309665 4966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators_8f59b8d2-65c5-447c-b71c-6aa014c7e531_0(dee13ea2f623812ef099ae7327a8ef2ede90944442f79113a06bad82f016fe08): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.309697 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators_8f59b8d2-65c5-447c-b71c-6aa014c7e531_0(dee13ea2f623812ef099ae7327a8ef2ede90944442f79113a06bad82f016fe08): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.309715 4966 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators_8f59b8d2-65c5-447c-b71c-6aa014c7e531_0(dee13ea2f623812ef099ae7327a8ef2ede90944442f79113a06bad82f016fe08): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:23 crc kubenswrapper[4966]: E0127 15:54:23.309747 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators(8f59b8d2-65c5-447c-b71c-6aa014c7e531)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators(8f59b8d2-65c5-447c-b71c-6aa014c7e531)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_openshift-operators_8f59b8d2-65c5-447c-b71c-6aa014c7e531_0(dee13ea2f623812ef099ae7327a8ef2ede90944442f79113a06bad82f016fe08): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" podUID="8f59b8d2-65c5-447c-b71c-6aa014c7e531" Jan 27 15:54:27 crc kubenswrapper[4966]: I0127 15:54:27.243433 4966 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 15:54:31 crc kubenswrapper[4966]: E0127 15:54:31.770717 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:54:32 crc kubenswrapper[4966]: E0127 15:54:32.626972 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache]" Jan 27 15:54:33 crc kubenswrapper[4966]: I0127 15:54:33.520563 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:33 crc kubenswrapper[4966]: I0127 15:54:33.521236 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:33 crc kubenswrapper[4966]: I0127 15:54:33.962640 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-x6l4k"] Jan 27 15:54:33 crc kubenswrapper[4966]: W0127 15:54:33.966056 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9f6b9a4_ded2_467b_9e87_6fafa667f709.slice/crio-1dfca53c417c3b850f0aca42c26d44ecf2c0268c620306cdb668842b4b3225ba WatchSource:0}: Error finding container 1dfca53c417c3b850f0aca42c26d44ecf2c0268c620306cdb668842b4b3225ba: Status 404 returned error can't find the container with id 1dfca53c417c3b850f0aca42c26d44ecf2c0268c620306cdb668842b4b3225ba Jan 27 15:54:34 crc kubenswrapper[4966]: I0127 15:54:34.520049 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:34 crc kubenswrapper[4966]: I0127 15:54:34.524923 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:34 crc kubenswrapper[4966]: I0127 15:54:34.947088 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" event={"ID":"b9f6b9a4-ded2-467b-9e87-6fafa667f709","Type":"ContainerStarted","Data":"1dfca53c417c3b850f0aca42c26d44ecf2c0268c620306cdb668842b4b3225ba"} Jan 27 15:54:34 crc kubenswrapper[4966]: I0127 15:54:34.987724 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9876t"] Jan 27 15:54:34 crc kubenswrapper[4966]: W0127 15:54:34.992778 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0443c8da_0b0f_4632_b990_f83e403a8b82.slice/crio-b44b52ae59ec2f0f1de319815115d33c7b58d5db6ce200622506d7fcc8f5ad4a WatchSource:0}: Error finding container b44b52ae59ec2f0f1de319815115d33c7b58d5db6ce200622506d7fcc8f5ad4a: Status 404 returned error can't find the container with id b44b52ae59ec2f0f1de319815115d33c7b58d5db6ce200622506d7fcc8f5ad4a Jan 27 15:54:35 crc kubenswrapper[4966]: I0127 15:54:35.519834 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:35 crc kubenswrapper[4966]: I0127 15:54:35.519918 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:35 crc kubenswrapper[4966]: I0127 15:54:35.520358 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" Jan 27 15:54:35 crc kubenswrapper[4966]: I0127 15:54:35.520434 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" Jan 27 15:54:35 crc kubenswrapper[4966]: I0127 15:54:35.807389 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2"] Jan 27 15:54:35 crc kubenswrapper[4966]: I0127 15:54:35.953448 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" event={"ID":"15e87d6f-3d12-45d6-9d4c-e23919de2787","Type":"ContainerStarted","Data":"115efa2212b47eb1ef2515b86b70e20fc99ac808cbc0eb4038916e6da7bdeb04"} Jan 27 15:54:35 crc kubenswrapper[4966]: I0127 15:54:35.954358 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9876t" event={"ID":"0443c8da-0b0f-4632-b990-f83e403a8b82","Type":"ContainerStarted","Data":"b44b52ae59ec2f0f1de319815115d33c7b58d5db6ce200622506d7fcc8f5ad4a"} Jan 27 15:54:35 crc kubenswrapper[4966]: I0127 15:54:35.978515 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89"] Jan 27 15:54:35 crc kubenswrapper[4966]: W0127 15:54:35.983137 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81566a49_33d9_4ca2_baa8_1944c4769bf5.slice/crio-becd0efd1fce8439869a809fa550bf0d277edfade142fcdabe82fe0437d6dd7c WatchSource:0}: Error finding container becd0efd1fce8439869a809fa550bf0d277edfade142fcdabe82fe0437d6dd7c: Status 404 returned error can't find the container with id becd0efd1fce8439869a809fa550bf0d277edfade142fcdabe82fe0437d6dd7c Jan 27 15:54:36 crc kubenswrapper[4966]: I0127 15:54:36.963278 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" event={"ID":"81566a49-33d9-4ca2-baa8-1944c4769bf5","Type":"ContainerStarted","Data":"becd0efd1fce8439869a809fa550bf0d277edfade142fcdabe82fe0437d6dd7c"} Jan 27 15:54:37 crc kubenswrapper[4966]: I0127 15:54:37.520421 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:37 crc kubenswrapper[4966]: I0127 15:54:37.521338 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" Jan 27 15:54:40 crc kubenswrapper[4966]: I0127 15:54:40.119858 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:54:40 crc kubenswrapper[4966]: I0127 15:54:40.120242 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:54:40 crc kubenswrapper[4966]: I0127 15:54:40.120295 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:54:40 crc kubenswrapper[4966]: I0127 15:54:40.121042 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a029a7ea4e898768651cb0cd5395b77c8220f6bb9b0dc0c1269341b2b5716b1b"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:54:40 crc kubenswrapper[4966]: I0127 15:54:40.121105 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://a029a7ea4e898768651cb0cd5395b77c8220f6bb9b0dc0c1269341b2b5716b1b" gracePeriod=600 Jan 27 15:54:41 crc kubenswrapper[4966]: I0127 15:54:41.002948 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="a029a7ea4e898768651cb0cd5395b77c8220f6bb9b0dc0c1269341b2b5716b1b" exitCode=0 Jan 27 15:54:41 crc kubenswrapper[4966]: I0127 15:54:41.003032 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"a029a7ea4e898768651cb0cd5395b77c8220f6bb9b0dc0c1269341b2b5716b1b"} Jan 27 15:54:41 crc kubenswrapper[4966]: I0127 15:54:41.003089 4966 scope.go:117] "RemoveContainer" containerID="2f252fee4f97cee252f6da079f9b6faf80ef04117d5518af7975a7228534e6c0" Jan 27 15:54:42 crc kubenswrapper[4966]: E0127 15:54:42.766784 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache]" Jan 27 15:54:44 crc kubenswrapper[4966]: I0127 15:54:44.699980 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zpccn" Jan 27 15:54:45 crc kubenswrapper[4966]: I0127 15:54:45.005886 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d"] Jan 27 15:54:45 crc kubenswrapper[4966]: W0127 15:54:45.019479 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f59b8d2_65c5_447c_b71c_6aa014c7e531.slice/crio-9c596b2dc2a775ddf09e0626f5dc1a78d0f94a87edd4b9b8e2e8d9366bc226cd WatchSource:0}: Error finding container 9c596b2dc2a775ddf09e0626f5dc1a78d0f94a87edd4b9b8e2e8d9366bc226cd: Status 404 returned error can't find the container with id 9c596b2dc2a775ddf09e0626f5dc1a78d0f94a87edd4b9b8e2e8d9366bc226cd Jan 27 15:54:45 crc kubenswrapper[4966]: I0127 15:54:45.045009 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" event={"ID":"b9f6b9a4-ded2-467b-9e87-6fafa667f709","Type":"ContainerStarted","Data":"9c803913d5af3bdbcdd7aa4ae25128af1cb3648bc04c27cf97fad9bba99157d5"} Jan 27 15:54:45 crc kubenswrapper[4966]: I0127 15:54:45.046121 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:45 crc kubenswrapper[4966]: I0127 15:54:45.048737 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"3d5bc75034aa8f67957594a0b69bd77e9edbe97abc49cd6f918ee618fb39479c"} Jan 27 15:54:45 crc kubenswrapper[4966]: I0127 15:54:45.063089 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" event={"ID":"8f59b8d2-65c5-447c-b71c-6aa014c7e531","Type":"ContainerStarted","Data":"9c596b2dc2a775ddf09e0626f5dc1a78d0f94a87edd4b9b8e2e8d9366bc226cd"} Jan 27 15:54:45 crc kubenswrapper[4966]: I0127 15:54:45.072756 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" podStartSLOduration=14.262229582 podStartE2EDuration="25.072739155s" podCreationTimestamp="2026-01-27 15:54:20 +0000 UTC" firstStartedPulling="2026-01-27 15:54:33.96733514 +0000 UTC m=+740.270128648" lastFinishedPulling="2026-01-27 15:54:44.777844733 +0000 UTC m=+751.080638221" observedRunningTime="2026-01-27 15:54:45.072468027 +0000 UTC m=+751.375261515" watchObservedRunningTime="2026-01-27 15:54:45.072739155 +0000 UTC m=+751.375532663" Jan 27 15:54:46 crc kubenswrapper[4966]: I0127 15:54:46.070811 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9876t" event={"ID":"0443c8da-0b0f-4632-b990-f83e403a8b82","Type":"ContainerStarted","Data":"21003d8cc66cfa2e9138bf3f651cfa4e728059aa8e17d564ddc296377bd08473"} Jan 27 15:54:46 crc kubenswrapper[4966]: I0127 15:54:46.071993 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:46 crc kubenswrapper[4966]: I0127 15:54:46.073256 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" event={"ID":"15e87d6f-3d12-45d6-9d4c-e23919de2787","Type":"ContainerStarted","Data":"8408e9be41e0d3e2a51aa1c784d6d9e5b41523c319a7059c16d8d7011e8db879"} Jan 27 15:54:46 crc kubenswrapper[4966]: I0127 15:54:46.098813 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9876t" 
podStartSLOduration=16.237996153 podStartE2EDuration="26.098792307s" podCreationTimestamp="2026-01-27 15:54:20 +0000 UTC" firstStartedPulling="2026-01-27 15:54:34.995465687 +0000 UTC m=+741.298259175" lastFinishedPulling="2026-01-27 15:54:44.856261841 +0000 UTC m=+751.159055329" observedRunningTime="2026-01-27 15:54:46.095934978 +0000 UTC m=+752.398728486" watchObservedRunningTime="2026-01-27 15:54:46.098792307 +0000 UTC m=+752.401585795" Jan 27 15:54:46 crc kubenswrapper[4966]: I0127 15:54:46.126715 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwbv2" podStartSLOduration=18.176028536 podStartE2EDuration="27.126691962s" podCreationTimestamp="2026-01-27 15:54:19 +0000 UTC" firstStartedPulling="2026-01-27 15:54:35.839634277 +0000 UTC m=+742.142427765" lastFinishedPulling="2026-01-27 15:54:44.790297703 +0000 UTC m=+751.093091191" observedRunningTime="2026-01-27 15:54:46.122291823 +0000 UTC m=+752.425085331" watchObservedRunningTime="2026-01-27 15:54:46.126691962 +0000 UTC m=+752.429485460" Jan 27 15:54:46 crc kubenswrapper[4966]: I0127 15:54:46.138247 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 15:54:46 crc kubenswrapper[4966]: E0127 15:54:46.643855 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:54:47 crc kubenswrapper[4966]: I0127 15:54:47.080210 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" event={"ID":"8f59b8d2-65c5-447c-b71c-6aa014c7e531","Type":"ContainerStarted","Data":"cf36202a2b5619b40b388e189f76d7b1086e000f5b41595d0faab810ebe021d5"} Jan 27 15:54:47 crc kubenswrapper[4966]: I0127 15:54:47.082159 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" event={"ID":"81566a49-33d9-4ca2-baa8-1944c4769bf5","Type":"ContainerStarted","Data":"57e11162fb528914cb95d98763976c7cd91e6b7032e6538260dbb7f453b72088"} Jan 27 15:54:47 crc kubenswrapper[4966]: I0127 15:54:47.100945 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d" podStartSLOduration=27.046069194 podStartE2EDuration="28.100924669s" podCreationTimestamp="2026-01-27 15:54:19 +0000 UTC" firstStartedPulling="2026-01-27 15:54:45.027973582 +0000 UTC m=+751.330767070" lastFinishedPulling="2026-01-27 15:54:46.082829057 +0000 UTC m=+752.385622545" observedRunningTime="2026-01-27 15:54:47.096557341 +0000 UTC m=+753.399350839" watchObservedRunningTime="2026-01-27 15:54:47.100924669 +0000 UTC m=+753.403718177" Jan 27 15:54:47 crc kubenswrapper[4966]: I0127 15:54:47.122544 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-847885c9f7-s4b89" podStartSLOduration=18.029144412 podStartE2EDuration="28.122528586s" 
podCreationTimestamp="2026-01-27 15:54:19 +0000 UTC" firstStartedPulling="2026-01-27 15:54:35.985025414 +0000 UTC m=+742.287818902" lastFinishedPulling="2026-01-27 15:54:46.078409578 +0000 UTC m=+752.381203076" observedRunningTime="2026-01-27 15:54:47.1191708 +0000 UTC m=+753.421964308" watchObservedRunningTime="2026-01-27 15:54:47.122528586 +0000 UTC m=+753.425322074" Jan 27 15:54:48 crc kubenswrapper[4966]: E0127 15:54:48.173094 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:54:48 crc kubenswrapper[4966]: E0127 15:54:48.173176 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache]" Jan 27 15:54:50 crc kubenswrapper[4966]: I0127 15:54:50.722880 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 15:54:52 crc kubenswrapper[4966]: E0127 15:54:52.792594 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.595518 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr"] Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.597441 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.600232 4966 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-cnvwf" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.600492 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.601756 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.603361 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-jmsgj"] Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.604439 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jmsgj" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.606007 4966 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-62f7d" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.615126 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr"] Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.619995 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jmsgj"] Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.636210 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5jnwt"] Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.637149 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.644549 4966 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qcbgs" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.648766 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5jnwt"] Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.738701 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xrb\" (UniqueName: \"kubernetes.io/projected/d545853c-f504-4f02-a056-06ae19f8d3a4-kube-api-access-g4xrb\") pod \"cert-manager-858654f9db-jmsgj\" (UID: \"d545853c-f504-4f02-a056-06ae19f8d3a4\") " pod="cert-manager/cert-manager-858654f9db-jmsgj" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.738910 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp82c\" (UniqueName: \"kubernetes.io/projected/4e21f2be-885f-4486-a5c2-056b78ab3ae1-kube-api-access-pp82c\") pod \"cert-manager-cainjector-cf98fcc89-rk7xr\" (UID: \"4e21f2be-885f-4486-a5c2-056b78ab3ae1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.739146 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbn6h\" (UniqueName: \"kubernetes.io/projected/37cde3a9-999c-4c96-a024-1769c058c4c8-kube-api-access-lbn6h\") pod \"cert-manager-webhook-687f57d79b-5jnwt\" (UID: \"37cde3a9-999c-4c96-a024-1769c058c4c8\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.841131 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp82c\" (UniqueName: \"kubernetes.io/projected/4e21f2be-885f-4486-a5c2-056b78ab3ae1-kube-api-access-pp82c\") pod \"cert-manager-cainjector-cf98fcc89-rk7xr\" (UID: \"4e21f2be-885f-4486-a5c2-056b78ab3ae1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.841224 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbn6h\" (UniqueName: \"kubernetes.io/projected/37cde3a9-999c-4c96-a024-1769c058c4c8-kube-api-access-lbn6h\") pod \"cert-manager-webhook-687f57d79b-5jnwt\" (UID: \"37cde3a9-999c-4c96-a024-1769c058c4c8\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 
15:54:56.841270 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4xrb\" (UniqueName: \"kubernetes.io/projected/d545853c-f504-4f02-a056-06ae19f8d3a4-kube-api-access-g4xrb\") pod \"cert-manager-858654f9db-jmsgj\" (UID: \"d545853c-f504-4f02-a056-06ae19f8d3a4\") " pod="cert-manager/cert-manager-858654f9db-jmsgj" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.859808 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp82c\" (UniqueName: \"kubernetes.io/projected/4e21f2be-885f-4486-a5c2-056b78ab3ae1-kube-api-access-pp82c\") pod \"cert-manager-cainjector-cf98fcc89-rk7xr\" (UID: \"4e21f2be-885f-4486-a5c2-056b78ab3ae1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.863339 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbn6h\" (UniqueName: \"kubernetes.io/projected/37cde3a9-999c-4c96-a024-1769c058c4c8-kube-api-access-lbn6h\") pod \"cert-manager-webhook-687f57d79b-5jnwt\" (UID: \"37cde3a9-999c-4c96-a024-1769c058c4c8\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.866915 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4xrb\" (UniqueName: \"kubernetes.io/projected/d545853c-f504-4f02-a056-06ae19f8d3a4-kube-api-access-g4xrb\") pod \"cert-manager-858654f9db-jmsgj\" (UID: \"d545853c-f504-4f02-a056-06ae19f8d3a4\") " pod="cert-manager/cert-manager-858654f9db-jmsgj" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.922198 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.931320 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jmsgj" Jan 27 15:54:56 crc kubenswrapper[4966]: I0127 15:54:56.957262 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 15:54:57 crc kubenswrapper[4966]: I0127 15:54:57.145604 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr"] Jan 27 15:54:57 crc kubenswrapper[4966]: I0127 15:54:57.177218 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jmsgj"] Jan 27 15:54:57 crc kubenswrapper[4966]: I0127 15:54:57.228685 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5jnwt"] Jan 27 15:54:58 crc kubenswrapper[4966]: I0127 15:54:58.160651 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr" event={"ID":"4e21f2be-885f-4486-a5c2-056b78ab3ae1","Type":"ContainerStarted","Data":"dd0f1689a59f9a502a7f887cff709d28fc5a32b1f78caf42075fc82c8a2d9013"} Jan 27 15:54:58 crc kubenswrapper[4966]: I0127 15:54:58.164112 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jmsgj" event={"ID":"d545853c-f504-4f02-a056-06ae19f8d3a4","Type":"ContainerStarted","Data":"781dcfbf99b0bb82cb18db90f954d167406f3d305713e9d75bfc401c5fcc4cb9"} Jan 27 15:54:58 crc kubenswrapper[4966]: I0127 15:54:58.164879 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" event={"ID":"37cde3a9-999c-4c96-a024-1769c058c4c8","Type":"ContainerStarted","Data":"8eacde53517c2652767af5f762361b570746747583fad1d18fb5827fbed9356d"} Jan 27 15:55:01 crc kubenswrapper[4966]: E0127 15:55:01.767504 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache]" Jan 27 15:55:02 crc kubenswrapper[4966]: I0127 15:55:02.194747 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jmsgj" event={"ID":"d545853c-f504-4f02-a056-06ae19f8d3a4","Type":"ContainerStarted","Data":"723799e7f9dfa705af9798a2ceb9ac4e17e704bf457ba25d4854f66b23fd7085"} Jan 27 15:55:02 crc kubenswrapper[4966]: I0127 15:55:02.197208 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" event={"ID":"37cde3a9-999c-4c96-a024-1769c058c4c8","Type":"ContainerStarted","Data":"ddb977a8d41018ec4123c6e4bc785fd09ef6d6e5020dc06b779d31018fcf9484"} Jan 27 15:55:02 crc kubenswrapper[4966]: I0127 15:55:02.197336 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 15:55:02 crc kubenswrapper[4966]: I0127 15:55:02.199135 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr" event={"ID":"4e21f2be-885f-4486-a5c2-056b78ab3ae1","Type":"ContainerStarted","Data":"f68a358013d0771aa618f02e13c9980faca3b3734fe4a70c2a274837c30a0759"} Jan 27 15:55:02 crc kubenswrapper[4966]: I0127 15:55:02.214872 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-jmsgj" podStartSLOduration=2.201369609 
podStartE2EDuration="6.214851431s" podCreationTimestamp="2026-01-27 15:54:56 +0000 UTC" firstStartedPulling="2026-01-27 15:54:57.190231275 +0000 UTC m=+763.493024763" lastFinishedPulling="2026-01-27 15:55:01.203713087 +0000 UTC m=+767.506506585" observedRunningTime="2026-01-27 15:55:02.21195234 +0000 UTC m=+768.514745848" watchObservedRunningTime="2026-01-27 15:55:02.214851431 +0000 UTC m=+768.517644919" Jan 27 15:55:02 crc kubenswrapper[4966]: I0127 15:55:02.226737 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podStartSLOduration=2.19438316 podStartE2EDuration="6.226720793s" podCreationTimestamp="2026-01-27 15:54:56 +0000 UTC" firstStartedPulling="2026-01-27 15:54:57.236513996 +0000 UTC m=+763.539307484" lastFinishedPulling="2026-01-27 15:55:01.268851629 +0000 UTC m=+767.571645117" observedRunningTime="2026-01-27 15:55:02.225827085 +0000 UTC m=+768.528620573" watchObservedRunningTime="2026-01-27 15:55:02.226720793 +0000 UTC m=+768.529514281" Jan 27 15:55:02 crc kubenswrapper[4966]: I0127 15:55:02.270232 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rk7xr" podStartSLOduration=2.215345466 podStartE2EDuration="6.270199115s" podCreationTimestamp="2026-01-27 15:54:56 +0000 UTC" firstStartedPulling="2026-01-27 15:54:57.149988254 +0000 UTC m=+763.452781742" lastFinishedPulling="2026-01-27 15:55:01.204841883 +0000 UTC m=+767.507635391" observedRunningTime="2026-01-27 15:55:02.258629834 +0000 UTC m=+768.561423322" watchObservedRunningTime="2026-01-27 15:55:02.270199115 +0000 UTC m=+768.572992633" Jan 27 15:55:02 crc kubenswrapper[4966]: E0127 15:55:02.820525 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:55:06 crc kubenswrapper[4966]: I0127 15:55:06.960252 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 15:55:12 crc kubenswrapper[4966]: E0127 15:55:12.973905 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice/crio-de960984fb88f43e329529bd3fc054cdb7b20549d7d5e819b0fcdd26ede91e28\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a25d116_d49b_4533_bac7_74bee93062b1.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.727324 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d"] Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.728994 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.731708 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.747002 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d"] Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.874279 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8pj\" (UniqueName: \"kubernetes.io/projected/7274aa23-2136-466c-a16c-172e17d804ce-kube-api-access-4m8pj\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.874379 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.874437 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.920094 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc"] Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.921384 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.931679 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc"] Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.975378 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8pj\" (UniqueName: \"kubernetes.io/projected/7274aa23-2136-466c-a16c-172e17d804ce-kube-api-access-4m8pj\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.975454 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.975502 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.976021 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.976183 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:30 crc kubenswrapper[4966]: I0127 15:55:30.994888 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8pj\" (UniqueName: \"kubernetes.io/projected/7274aa23-2136-466c-a16c-172e17d804ce-kube-api-access-4m8pj\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.046130 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.076489 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.076582 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgmxw\" (UniqueName: \"kubernetes.io/projected/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-kube-api-access-qgmxw\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.076624 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.177751 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.178164 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgmxw\" (UniqueName: \"kubernetes.io/projected/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-kube-api-access-qgmxw\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.178198 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.178435 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.178656 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.202872 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgmxw\" (UniqueName: \"kubernetes.io/projected/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-kube-api-access-qgmxw\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.236863 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.459066 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc"] Jan 27 15:55:31 crc kubenswrapper[4966]: I0127 15:55:31.464074 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d"] Jan 27 15:55:32 crc kubenswrapper[4966]: I0127 15:55:32.411346 4966 generic.go:334] "Generic (PLEG): container finished" podID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" containerID="e48ab3a296dffdfe6397b68bd9074ce37cd41817f04849e86be5e3befbdf6657" exitCode=0 Jan 27 15:55:32 crc kubenswrapper[4966]: I0127 15:55:32.411424 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" event={"ID":"6c3974f8-3174-42c5-b11c-ae9c190dd0c3","Type":"ContainerDied","Data":"e48ab3a296dffdfe6397b68bd9074ce37cd41817f04849e86be5e3befbdf6657"} Jan 27 15:55:32 crc kubenswrapper[4966]: I0127 15:55:32.411793 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" event={"ID":"6c3974f8-3174-42c5-b11c-ae9c190dd0c3","Type":"ContainerStarted","Data":"d80f84c19a3065e3d62f4aaddbca29810d422511bac313d38380d85366cb6c38"} Jan 27 15:55:32 crc kubenswrapper[4966]: I0127 15:55:32.414026 4966 generic.go:334] "Generic (PLEG): container finished" podID="7274aa23-2136-466c-a16c-172e17d804ce" containerID="cb7f379e419d861d190504a32624123ac940d2b659533bf109a89569ed68f36e" exitCode=0 Jan 27 15:55:32 crc kubenswrapper[4966]: I0127 15:55:32.414069 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" event={"ID":"7274aa23-2136-466c-a16c-172e17d804ce","Type":"ContainerDied","Data":"cb7f379e419d861d190504a32624123ac940d2b659533bf109a89569ed68f36e"} Jan 27 15:55:32 crc kubenswrapper[4966]: I0127 15:55:32.414095 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" event={"ID":"7274aa23-2136-466c-a16c-172e17d804ce","Type":"ContainerStarted","Data":"cc8da926815ec1afa48746cfe94b6ae4b6a44957e9a7502c9a3cab6d7151fef5"} Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.436447 4966 generic.go:334] "Generic (PLEG): container finished" podID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" 
containerID="93b07e0a7b0a189316be4639b65ea371e7030ff61f6b3bcca7c1baf0961448da" exitCode=0 Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.436767 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" event={"ID":"6c3974f8-3174-42c5-b11c-ae9c190dd0c3","Type":"ContainerDied","Data":"93b07e0a7b0a189316be4639b65ea371e7030ff61f6b3bcca7c1baf0961448da"} Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.440450 4966 generic.go:334] "Generic (PLEG): container finished" podID="7274aa23-2136-466c-a16c-172e17d804ce" containerID="05b2e3035b4e7fa051daa08c8431a3c610e34c0e728951466f06b9c90ddeb334" exitCode=0 Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.440484 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" event={"ID":"7274aa23-2136-466c-a16c-172e17d804ce","Type":"ContainerDied","Data":"05b2e3035b4e7fa051daa08c8431a3c610e34c0e728951466f06b9c90ddeb334"} Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.489102 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vptgd"] Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.490435 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.502756 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vptgd"] Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.627528 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-catalog-content\") pod \"redhat-operators-vptgd\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.628293 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcgsp\" (UniqueName: \"kubernetes.io/projected/e426b946-a63a-4b3c-a448-c2575290ba0f-kube-api-access-rcgsp\") pod \"redhat-operators-vptgd\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.628579 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-utilities\") pod \"redhat-operators-vptgd\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.729706 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-catalog-content\") pod \"redhat-operators-vptgd\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.729817 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcgsp\" (UniqueName: \"kubernetes.io/projected/e426b946-a63a-4b3c-a448-c2575290ba0f-kube-api-access-rcgsp\") pod \"redhat-operators-vptgd\" (UID: 
\"e426b946-a63a-4b3c-a448-c2575290ba0f\") " pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.729843 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-utilities\") pod \"redhat-operators-vptgd\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.730389 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-catalog-content\") pod \"redhat-operators-vptgd\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.730457 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-utilities\") pod \"redhat-operators-vptgd\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.754342 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcgsp\" (UniqueName: \"kubernetes.io/projected/e426b946-a63a-4b3c-a448-c2575290ba0f-kube-api-access-rcgsp\") pod \"redhat-operators-vptgd\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:34 crc kubenswrapper[4966]: I0127 15:55:34.860936 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:35 crc kubenswrapper[4966]: I0127 15:55:35.332365 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vptgd"] Jan 27 15:55:35 crc kubenswrapper[4966]: W0127 15:55:35.348779 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode426b946_a63a_4b3c_a448_c2575290ba0f.slice/crio-7ca98d8848834e7fbabeb960246d9f88eae1c8da00c11793e8ab8ce966f6156f WatchSource:0}: Error finding container 7ca98d8848834e7fbabeb960246d9f88eae1c8da00c11793e8ab8ce966f6156f: Status 404 returned error can't find the container with id 7ca98d8848834e7fbabeb960246d9f88eae1c8da00c11793e8ab8ce966f6156f Jan 27 15:55:35 crc kubenswrapper[4966]: I0127 15:55:35.447965 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vptgd" event={"ID":"e426b946-a63a-4b3c-a448-c2575290ba0f","Type":"ContainerStarted","Data":"7ca98d8848834e7fbabeb960246d9f88eae1c8da00c11793e8ab8ce966f6156f"} Jan 27 15:55:35 crc kubenswrapper[4966]: I0127 15:55:35.451479 4966 generic.go:334] "Generic (PLEG): container finished" podID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" containerID="712feef975cfb310405174831efd503669f7eb71e79483c834cd446eb50d31ea" exitCode=0 Jan 27 15:55:35 crc kubenswrapper[4966]: I0127 15:55:35.451546 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" event={"ID":"6c3974f8-3174-42c5-b11c-ae9c190dd0c3","Type":"ContainerDied","Data":"712feef975cfb310405174831efd503669f7eb71e79483c834cd446eb50d31ea"} Jan 27 15:55:35 crc kubenswrapper[4966]: I0127 15:55:35.453588 4966 generic.go:334] "Generic 
(PLEG): container finished" podID="7274aa23-2136-466c-a16c-172e17d804ce" containerID="8384cd70a1a34d8e4db96830b968e6ca0fab1b8295f90007a6b3c65b8be3a27b" exitCode=0 Jan 27 15:55:35 crc kubenswrapper[4966]: I0127 15:55:35.453638 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" event={"ID":"7274aa23-2136-466c-a16c-172e17d804ce","Type":"ContainerDied","Data":"8384cd70a1a34d8e4db96830b968e6ca0fab1b8295f90007a6b3c65b8be3a27b"} Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.464840 4966 generic.go:334] "Generic (PLEG): container finished" podID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerID="5b2d63090b1499b203526fd20c8f693b8ae89c5416d125b845260d76bff995ad" exitCode=0 Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.464935 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vptgd" event={"ID":"e426b946-a63a-4b3c-a448-c2575290ba0f","Type":"ContainerDied","Data":"5b2d63090b1499b203526fd20c8f693b8ae89c5416d125b845260d76bff995ad"} Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.802830 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.813772 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.960766 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-bundle\") pod \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.960825 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-util\") pod \"7274aa23-2136-466c-a16c-172e17d804ce\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.960861 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgmxw\" (UniqueName: \"kubernetes.io/projected/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-kube-api-access-qgmxw\") pod \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.960987 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m8pj\" (UniqueName: \"kubernetes.io/projected/7274aa23-2136-466c-a16c-172e17d804ce-kube-api-access-4m8pj\") pod \"7274aa23-2136-466c-a16c-172e17d804ce\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.961057 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-util\") pod \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\" (UID: \"6c3974f8-3174-42c5-b11c-ae9c190dd0c3\") " Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.961767 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-bundle\") 
pod \"7274aa23-2136-466c-a16c-172e17d804ce\" (UID: \"7274aa23-2136-466c-a16c-172e17d804ce\") " Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.962080 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-bundle" (OuterVolumeSpecName: "bundle") pod "6c3974f8-3174-42c5-b11c-ae9c190dd0c3" (UID: "6c3974f8-3174-42c5-b11c-ae9c190dd0c3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.962775 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-bundle" (OuterVolumeSpecName: "bundle") pod "7274aa23-2136-466c-a16c-172e17d804ce" (UID: "7274aa23-2136-466c-a16c-172e17d804ce"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.967136 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-kube-api-access-qgmxw" (OuterVolumeSpecName: "kube-api-access-qgmxw") pod "6c3974f8-3174-42c5-b11c-ae9c190dd0c3" (UID: "6c3974f8-3174-42c5-b11c-ae9c190dd0c3"). InnerVolumeSpecName "kube-api-access-qgmxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.967493 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7274aa23-2136-466c-a16c-172e17d804ce-kube-api-access-4m8pj" (OuterVolumeSpecName: "kube-api-access-4m8pj") pod "7274aa23-2136-466c-a16c-172e17d804ce" (UID: "7274aa23-2136-466c-a16c-172e17d804ce"). InnerVolumeSpecName "kube-api-access-4m8pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.976516 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-util" (OuterVolumeSpecName: "util") pod "7274aa23-2136-466c-a16c-172e17d804ce" (UID: "7274aa23-2136-466c-a16c-172e17d804ce"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:36 crc kubenswrapper[4966]: I0127 15:55:36.998251 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-util" (OuterVolumeSpecName: "util") pod "6c3974f8-3174-42c5-b11c-ae9c190dd0c3" (UID: "6c3974f8-3174-42c5-b11c-ae9c190dd0c3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.063953 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m8pj\" (UniqueName: \"kubernetes.io/projected/7274aa23-2136-466c-a16c-172e17d804ce-kube-api-access-4m8pj\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.064018 4966 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-util\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.064046 4966 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.064071 4966 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.064096 4966 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7274aa23-2136-466c-a16c-172e17d804ce-util\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.064124 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgmxw\" (UniqueName: \"kubernetes.io/projected/6c3974f8-3174-42c5-b11c-ae9c190dd0c3-kube-api-access-qgmxw\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.475676 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" event={"ID":"7274aa23-2136-466c-a16c-172e17d804ce","Type":"ContainerDied","Data":"cc8da926815ec1afa48746cfe94b6ae4b6a44957e9a7502c9a3cab6d7151fef5"} Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.476124 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc8da926815ec1afa48746cfe94b6ae4b6a44957e9a7502c9a3cab6d7151fef5" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.476007 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.490298 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vptgd" event={"ID":"e426b946-a63a-4b3c-a448-c2575290ba0f","Type":"ContainerStarted","Data":"4a12d698332269651321fc75aa4e6d3f8a9538be3445bbe5ff3f5ecb9736e2d0"} Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.495307 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" event={"ID":"6c3974f8-3174-42c5-b11c-ae9c190dd0c3","Type":"ContainerDied","Data":"d80f84c19a3065e3d62f4aaddbca29810d422511bac313d38380d85366cb6c38"} Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.495338 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d80f84c19a3065e3d62f4aaddbca29810d422511bac313d38380d85366cb6c38" Jan 27 15:55:37 crc kubenswrapper[4966]: I0127 15:55:37.495395 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc" Jan 27 15:55:38 crc kubenswrapper[4966]: I0127 15:55:38.504791 4966 generic.go:334] "Generic (PLEG): container finished" podID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerID="4a12d698332269651321fc75aa4e6d3f8a9538be3445bbe5ff3f5ecb9736e2d0" exitCode=0 Jan 27 15:55:38 crc kubenswrapper[4966]: I0127 15:55:38.504867 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vptgd" event={"ID":"e426b946-a63a-4b3c-a448-c2575290ba0f","Type":"ContainerDied","Data":"4a12d698332269651321fc75aa4e6d3f8a9538be3445bbe5ff3f5ecb9736e2d0"} Jan 27 15:55:39 crc kubenswrapper[4966]: I0127 15:55:39.514467 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vptgd" event={"ID":"e426b946-a63a-4b3c-a448-c2575290ba0f","Type":"ContainerStarted","Data":"c5a31c63c7bf80807ec2969670a3afc06f95c6bc78a3e6c7a7d1fa9f516aa407"} Jan 27 15:55:39 crc kubenswrapper[4966]: I0127 15:55:39.535462 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vptgd" podStartSLOduration=3.061904721 podStartE2EDuration="5.535444444s" podCreationTimestamp="2026-01-27 15:55:34 +0000 UTC" firstStartedPulling="2026-01-27 15:55:36.467042426 +0000 UTC m=+802.769835934" lastFinishedPulling="2026-01-27 15:55:38.940582169 +0000 UTC m=+805.243375657" observedRunningTime="2026-01-27 15:55:39.533815674 +0000 UTC m=+805.836609172" watchObservedRunningTime="2026-01-27 15:55:39.535444444 +0000 UTC m=+805.838237952" Jan 27 15:55:44 crc kubenswrapper[4966]: I0127 15:55:44.861526 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:44 crc kubenswrapper[4966]: I0127 15:55:44.862127 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.830725 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg"] Jan 27 15:55:45 crc kubenswrapper[4966]: E0127 15:55:45.831516 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" containerName="pull" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.831591 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" containerName="pull" Jan 27 15:55:45 crc kubenswrapper[4966]: E0127 15:55:45.831649 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7274aa23-2136-466c-a16c-172e17d804ce" containerName="extract" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.831696 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7274aa23-2136-466c-a16c-172e17d804ce" containerName="extract" Jan 27 15:55:45 crc kubenswrapper[4966]: E0127 15:55:45.831753 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7274aa23-2136-466c-a16c-172e17d804ce" containerName="util" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.831809 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7274aa23-2136-466c-a16c-172e17d804ce" containerName="util" Jan 27 15:55:45 crc kubenswrapper[4966]: E0127 15:55:45.831859 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7274aa23-2136-466c-a16c-172e17d804ce" containerName="pull" Jan 27 15:55:45 
crc kubenswrapper[4966]: I0127 15:55:45.831932 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7274aa23-2136-466c-a16c-172e17d804ce" containerName="pull" Jan 27 15:55:45 crc kubenswrapper[4966]: E0127 15:55:45.831994 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" containerName="extract" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.832046 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" containerName="extract" Jan 27 15:55:45 crc kubenswrapper[4966]: E0127 15:55:45.832111 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" containerName="util" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.832162 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" containerName="util" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.832350 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="7274aa23-2136-466c-a16c-172e17d804ce" containerName="extract" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.832428 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3974f8-3174-42c5-b11c-ae9c190dd0c3" containerName="extract" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.833307 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.837295 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.837698 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.838226 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-jpsbn" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.838798 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.839764 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.847006 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.858597 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg"] Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.911612 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vptgd" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerName="registry-server" probeResult="failure" output=< Jan 27 15:55:45 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 15:55:45 crc kubenswrapper[4966]: > Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.999449 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8ee78ad6-4785-4aee-a8cb-c16b147764d9-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.999519 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8ee78ad6-4785-4aee-a8cb-c16b147764d9-manager-config\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.999554 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ee78ad6-4785-4aee-a8cb-c16b147764d9-apiservice-cert\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.999577 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ee78ad6-4785-4aee-a8cb-c16b147764d9-webhook-cert\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:45 crc kubenswrapper[4966]: I0127 15:55:45.999631 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmqv6\" (UniqueName: \"kubernetes.io/projected/8ee78ad6-4785-4aee-a8cb-c16b147764d9-kube-api-access-mmqv6\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.101296 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ee78ad6-4785-4aee-a8cb-c16b147764d9-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.101369 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8ee78ad6-4785-4aee-a8cb-c16b147764d9-manager-config\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.101409 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ee78ad6-4785-4aee-a8cb-c16b147764d9-apiservice-cert\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc 
kubenswrapper[4966]: I0127 15:55:46.101430 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ee78ad6-4785-4aee-a8cb-c16b147764d9-webhook-cert\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.101485 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmqv6\" (UniqueName: \"kubernetes.io/projected/8ee78ad6-4785-4aee-a8cb-c16b147764d9-kube-api-access-mmqv6\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.102816 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8ee78ad6-4785-4aee-a8cb-c16b147764d9-manager-config\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.108211 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ee78ad6-4785-4aee-a8cb-c16b147764d9-webhook-cert\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.110604 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ee78ad6-4785-4aee-a8cb-c16b147764d9-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.111384 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ee78ad6-4785-4aee-a8cb-c16b147764d9-apiservice-cert\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.121212 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmqv6\" (UniqueName: \"kubernetes.io/projected/8ee78ad6-4785-4aee-a8cb-c16b147764d9-kube-api-access-mmqv6\") pod \"loki-operator-controller-manager-7c74c5b958-9l7lg\" (UID: \"8ee78ad6-4785-4aee-a8cb-c16b147764d9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.156737 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:55:46 crc kubenswrapper[4966]: I0127 15:55:46.591344 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg"] Jan 27 15:55:46 crc kubenswrapper[4966]: W0127 15:55:46.606227 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ee78ad6_4785_4aee_a8cb_c16b147764d9.slice/crio-4fabdc63b5bb9e8b0e447cee4650cadf272d9f063b7f47f50736f4413c1ab1f0 WatchSource:0}: Error finding container 4fabdc63b5bb9e8b0e447cee4650cadf272d9f063b7f47f50736f4413c1ab1f0: Status 404 returned error can't find the container with id 4fabdc63b5bb9e8b0e447cee4650cadf272d9f063b7f47f50736f4413c1ab1f0 Jan 27 15:55:47 crc kubenswrapper[4966]: I0127 15:55:47.563772 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" event={"ID":"8ee78ad6-4785-4aee-a8cb-c16b147764d9","Type":"ContainerStarted","Data":"4fabdc63b5bb9e8b0e447cee4650cadf272d9f063b7f47f50736f4413c1ab1f0"} Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.123144 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz"] Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.126464 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz" Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.128701 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.129002 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.132477 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-tmv5n" Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.138072 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz"] Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.282719 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xsjt\" (UniqueName: \"kubernetes.io/projected/fe36d099-9929-462a-8275-c58158cafe2c-kube-api-access-2xsjt\") pod \"cluster-logging-operator-79cf69ddc8-pk8lz\" (UID: \"fe36d099-9929-462a-8275-c58158cafe2c\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz" Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.384338 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xsjt\" (UniqueName: \"kubernetes.io/projected/fe36d099-9929-462a-8275-c58158cafe2c-kube-api-access-2xsjt\") pod \"cluster-logging-operator-79cf69ddc8-pk8lz\" (UID: \"fe36d099-9929-462a-8275-c58158cafe2c\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz" Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.403280 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xsjt\" (UniqueName: \"kubernetes.io/projected/fe36d099-9929-462a-8275-c58158cafe2c-kube-api-access-2xsjt\") pod \"cluster-logging-operator-79cf69ddc8-pk8lz\" (UID: 
\"fe36d099-9929-462a-8275-c58158cafe2c\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz" Jan 27 15:55:50 crc kubenswrapper[4966]: I0127 15:55:50.448108 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz" Jan 27 15:55:52 crc kubenswrapper[4966]: I0127 15:55:52.131328 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz"] Jan 27 15:55:52 crc kubenswrapper[4966]: I0127 15:55:52.594802 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" event={"ID":"8ee78ad6-4785-4aee-a8cb-c16b147764d9","Type":"ContainerStarted","Data":"84f6743f1cd5bc6127dcfa834dffe76a6e740a7ce3b8e02be1ce653eb8cca121"} Jan 27 15:55:52 crc kubenswrapper[4966]: I0127 15:55:52.595648 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz" event={"ID":"fe36d099-9929-462a-8275-c58158cafe2c","Type":"ContainerStarted","Data":"84771a3bcd88a05069138cc9dd481cef800d889b149a424d3ac1ece7a9eb4a8b"} Jan 27 15:55:54 crc kubenswrapper[4966]: I0127 15:55:54.927351 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:54 crc kubenswrapper[4966]: I0127 15:55:54.970689 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:55:57 crc kubenswrapper[4966]: I0127 15:55:57.670836 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vptgd"] Jan 27 15:55:57 crc kubenswrapper[4966]: I0127 15:55:57.671342 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vptgd" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerName="registry-server" containerID="cri-o://c5a31c63c7bf80807ec2969670a3afc06f95c6bc78a3e6c7a7d1fa9f516aa407" gracePeriod=2 Jan 27 15:55:58 crc kubenswrapper[4966]: I0127 15:55:58.652368 4966 generic.go:334] "Generic (PLEG): container finished" podID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerID="c5a31c63c7bf80807ec2969670a3afc06f95c6bc78a3e6c7a7d1fa9f516aa407" exitCode=0 Jan 27 15:55:58 crc kubenswrapper[4966]: I0127 15:55:58.652421 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vptgd" event={"ID":"e426b946-a63a-4b3c-a448-c2575290ba0f","Type":"ContainerDied","Data":"c5a31c63c7bf80807ec2969670a3afc06f95c6bc78a3e6c7a7d1fa9f516aa407"} Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.381089 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.549844 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcgsp\" (UniqueName: \"kubernetes.io/projected/e426b946-a63a-4b3c-a448-c2575290ba0f-kube-api-access-rcgsp\") pod \"e426b946-a63a-4b3c-a448-c2575290ba0f\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.549967 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-utilities\") pod \"e426b946-a63a-4b3c-a448-c2575290ba0f\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.550036 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-catalog-content\") pod \"e426b946-a63a-4b3c-a448-c2575290ba0f\" (UID: \"e426b946-a63a-4b3c-a448-c2575290ba0f\") " Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.551748 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-utilities" (OuterVolumeSpecName: "utilities") pod "e426b946-a63a-4b3c-a448-c2575290ba0f" (UID: "e426b946-a63a-4b3c-a448-c2575290ba0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.556015 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e426b946-a63a-4b3c-a448-c2575290ba0f-kube-api-access-rcgsp" (OuterVolumeSpecName: "kube-api-access-rcgsp") pod "e426b946-a63a-4b3c-a448-c2575290ba0f" (UID: "e426b946-a63a-4b3c-a448-c2575290ba0f"). InnerVolumeSpecName "kube-api-access-rcgsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.651652 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcgsp\" (UniqueName: \"kubernetes.io/projected/e426b946-a63a-4b3c-a448-c2575290ba0f-kube-api-access-rcgsp\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.651682 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.666058 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e426b946-a63a-4b3c-a448-c2575290ba0f" (UID: "e426b946-a63a-4b3c-a448-c2575290ba0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.672096 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vptgd" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.672105 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vptgd" event={"ID":"e426b946-a63a-4b3c-a448-c2575290ba0f","Type":"ContainerDied","Data":"7ca98d8848834e7fbabeb960246d9f88eae1c8da00c11793e8ab8ce966f6156f"} Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.672163 4966 scope.go:117] "RemoveContainer" containerID="c5a31c63c7bf80807ec2969670a3afc06f95c6bc78a3e6c7a7d1fa9f516aa407" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.675198 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" event={"ID":"8ee78ad6-4785-4aee-a8cb-c16b147764d9","Type":"ContainerStarted","Data":"7c47cc2d4e949e6f89b6425400f81ccdbd77102517f8f33f0c1dedda900dd045"} Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.676062 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.677078 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz" event={"ID":"fe36d099-9929-462a-8275-c58158cafe2c","Type":"ContainerStarted","Data":"5777218c87369e86aa52877907f3568410502b22bb3b1258bf459f7211f1dd40"} Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.678746 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.696506 4966 scope.go:117] "RemoveContainer" containerID="4a12d698332269651321fc75aa4e6d3f8a9538be3445bbe5ff3f5ecb9736e2d0" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.729601 4966 scope.go:117] "RemoveContainer" containerID="5b2d63090b1499b203526fd20c8f693b8ae89c5416d125b845260d76bff995ad" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.739321 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" podStartSLOduration=1.96237641 podStartE2EDuration="15.739301234s" podCreationTimestamp="2026-01-27 15:55:45 +0000 UTC" firstStartedPulling="2026-01-27 15:55:46.612858814 +0000 UTC m=+812.915652302" lastFinishedPulling="2026-01-27 15:56:00.389783638 +0000 UTC m=+826.692577126" observedRunningTime="2026-01-27 15:56:00.718177882 +0000 UTC m=+827.020971390" watchObservedRunningTime="2026-01-27 15:56:00.739301234 +0000 UTC m=+827.042094722" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.751254 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-pk8lz" podStartSLOduration=2.550409826 podStartE2EDuration="10.751232748s" podCreationTimestamp="2026-01-27 15:55:50 +0000 UTC" firstStartedPulling="2026-01-27 15:55:52.169442584 +0000 UTC m=+818.472236072" lastFinishedPulling="2026-01-27 15:56:00.370265506 +0000 UTC m=+826.673058994" observedRunningTime="2026-01-27 15:56:00.735570647 +0000 UTC m=+827.038364145" watchObservedRunningTime="2026-01-27 15:56:00.751232748 +0000 UTC m=+827.054026236" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.753595 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e426b946-a63a-4b3c-a448-c2575290ba0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.792563 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vptgd"] Jan 27 15:56:00 crc kubenswrapper[4966]: I0127 15:56:00.799013 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vptgd"] Jan 27 15:56:02 crc kubenswrapper[4966]: I0127 15:56:02.529299 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" path="/var/lib/kubelet/pods/e426b946-a63a-4b3c-a448-c2575290ba0f/volumes" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.813455 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 27 15:56:05 crc kubenswrapper[4966]: E0127 15:56:05.814976 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerName="extract-utilities" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.815140 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerName="extract-utilities" Jan 27 15:56:05 crc kubenswrapper[4966]: E0127 15:56:05.815295 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerName="registry-server" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.815439 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerName="registry-server" Jan 27 15:56:05 crc kubenswrapper[4966]: E0127 15:56:05.815585 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerName="extract-content" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.815709 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerName="extract-content" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.816037 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e426b946-a63a-4b3c-a448-c2575290ba0f" containerName="registry-server" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.816836 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.819556 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.820795 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.820877 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.922888 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9eb605ba-aa14-482f-bef5-8c34064fcea7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb605ba-aa14-482f-bef5-8c34064fcea7\") pod \"minio\" (UID: \"ef8ba961-fbf9-4922-bcca-150d18303d10\") " pod="minio-dev/minio" Jan 27 15:56:05 crc kubenswrapper[4966]: I0127 15:56:05.923012 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dbsg\" (UniqueName: \"kubernetes.io/projected/ef8ba961-fbf9-4922-bcca-150d18303d10-kube-api-access-9dbsg\") pod \"minio\" (UID: \"ef8ba961-fbf9-4922-bcca-150d18303d10\") " pod="minio-dev/minio" Jan 27 15:56:06 crc kubenswrapper[4966]: I0127 15:56:06.024891 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9eb605ba-aa14-482f-bef5-8c34064fcea7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb605ba-aa14-482f-bef5-8c34064fcea7\") pod \"minio\" (UID: \"ef8ba961-fbf9-4922-bcca-150d18303d10\") " pod="minio-dev/minio" Jan 27 15:56:06 crc kubenswrapper[4966]: I0127 15:56:06.025014 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dbsg\" (UniqueName: \"kubernetes.io/projected/ef8ba961-fbf9-4922-bcca-150d18303d10-kube-api-access-9dbsg\") pod \"minio\" (UID: \"ef8ba961-fbf9-4922-bcca-150d18303d10\") " pod="minio-dev/minio" Jan 27 15:56:06 crc kubenswrapper[4966]: I0127 15:56:06.028025 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 15:56:06 crc kubenswrapper[4966]: I0127 15:56:06.028061 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9eb605ba-aa14-482f-bef5-8c34064fcea7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb605ba-aa14-482f-bef5-8c34064fcea7\") pod \"minio\" (UID: \"ef8ba961-fbf9-4922-bcca-150d18303d10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7d84a1f98206c1bf0a4b956ed2422230906ba6d3a0bbe3aa308518693d34f878/globalmount\"" pod="minio-dev/minio" Jan 27 15:56:06 crc kubenswrapper[4966]: I0127 15:56:06.053425 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dbsg\" (UniqueName: \"kubernetes.io/projected/ef8ba961-fbf9-4922-bcca-150d18303d10-kube-api-access-9dbsg\") pod \"minio\" (UID: \"ef8ba961-fbf9-4922-bcca-150d18303d10\") " pod="minio-dev/minio" Jan 27 15:56:06 crc kubenswrapper[4966]: I0127 15:56:06.057978 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9eb605ba-aa14-482f-bef5-8c34064fcea7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb605ba-aa14-482f-bef5-8c34064fcea7\") pod \"minio\" (UID: \"ef8ba961-fbf9-4922-bcca-150d18303d10\") " pod="minio-dev/minio" Jan 27 15:56:06 crc kubenswrapper[4966]: I0127 15:56:06.149491 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 27 15:56:06 crc kubenswrapper[4966]: I0127 15:56:06.672149 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 27 15:56:06 crc kubenswrapper[4966]: I0127 15:56:06.718733 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ef8ba961-fbf9-4922-bcca-150d18303d10","Type":"ContainerStarted","Data":"39e8a1b6c8f6fda012cf9f97fcd0c288137f88dd0699465c62b80dd1bc3993b4"} Jan 27 15:56:09 crc kubenswrapper[4966]: I0127 15:56:09.738433 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ef8ba961-fbf9-4922-bcca-150d18303d10","Type":"ContainerStarted","Data":"e6ebe915d9863ed67befaa6cd16dc142729f9c072e9e2fabfa6bee2d57fb9003"} Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.517193 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=9.716914748 podStartE2EDuration="12.517152851s" podCreationTimestamp="2026-01-27 15:56:03 +0000 UTC" firstStartedPulling="2026-01-27 15:56:06.680284582 +0000 UTC m=+832.983078080" lastFinishedPulling="2026-01-27 15:56:09.480522695 +0000 UTC m=+835.783316183" observedRunningTime="2026-01-27 15:56:09.767968995 +0000 UTC m=+836.070762493" watchObservedRunningTime="2026-01-27 15:56:15.517152851 +0000 UTC m=+841.819946339" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.518416 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.519330 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.522594 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-vlqrr" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.522698 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.522862 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.522875 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.523034 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.536661 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.585804 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c70eec8b-c8da-4620-9c5e-bb19e5d66424-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.585930 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c70eec8b-c8da-4620-9c5e-bb19e5d66424-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.585977 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70eec8b-c8da-4620-9c5e-bb19e5d66424-config\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.586012 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvgh\" (UniqueName: \"kubernetes.io/projected/c70eec8b-c8da-4620-9c5e-bb19e5d66424-kube-api-access-kqvgh\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.586119 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c70eec8b-c8da-4620-9c5e-bb19e5d66424-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.658561 4966 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-76788598db-95xm4"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.659570 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.661649 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.661740 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.661745 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.684496 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-95xm4"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.686912 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c70eec8b-c8da-4620-9c5e-bb19e5d66424-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.686979 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c70eec8b-c8da-4620-9c5e-bb19e5d66424-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.687022 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70eec8b-c8da-4620-9c5e-bb19e5d66424-config\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.687049 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqvgh\" (UniqueName: \"kubernetes.io/projected/c70eec8b-c8da-4620-9c5e-bb19e5d66424-kube-api-access-kqvgh\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.687075 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c70eec8b-c8da-4620-9c5e-bb19e5d66424-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.689939 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c70eec8b-c8da-4620-9c5e-bb19e5d66424-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.690827 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70eec8b-c8da-4620-9c5e-bb19e5d66424-config\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.701616 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c70eec8b-c8da-4620-9c5e-bb19e5d66424-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.702652 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c70eec8b-c8da-4620-9c5e-bb19e5d66424-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.737035 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvgh\" (UniqueName: \"kubernetes.io/projected/c70eec8b-c8da-4620-9c5e-bb19e5d66424-kube-api-access-kqvgh\") pod \"logging-loki-distributor-5f678c8dd6-88nmc\" (UID: \"c70eec8b-c8da-4620-9c5e-bb19e5d66424\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.772720 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-6s27c"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.773600 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.780122 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.785238 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.788342 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.788392 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c26c09-83a0-4b08-979b-a138a5ed5d4b-config\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.788425 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-s3\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.788518 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.788563 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.788599 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khjp\" (UniqueName: \"kubernetes.io/projected/80c26c09-83a0-4b08-979b-a138a5ed5d4b-kube-api-access-5khjp\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.828911 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-6s27c"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.847372 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889668 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889712 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9c5e1e82-3053-4895-91ce-56475540fc35-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889742 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmm5q\" (UniqueName: \"kubernetes.io/projected/9c5e1e82-3053-4895-91ce-56475540fc35-kube-api-access-mmm5q\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889765 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c26c09-83a0-4b08-979b-a138a5ed5d4b-config\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889784 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c5e1e82-3053-4895-91ce-56475540fc35-config\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889820 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-s3\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889851 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c5e1e82-3053-4895-91ce-56475540fc35-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889873 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9c5e1e82-3053-4895-91ce-56475540fc35-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: 
\"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889917 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889934 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.889974 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5khjp\" (UniqueName: \"kubernetes.io/projected/80c26c09-83a0-4b08-979b-a138a5ed5d4b-kube-api-access-5khjp\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.890000 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-575b568fc4-lr7wd"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.891442 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.893457 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.893608 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c26c09-83a0-4b08-979b-a138a5ed5d4b-config\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.893837 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.894539 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.894779 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-s3\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.894790 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 27 
15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.894909 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.895723 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.897054 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.900536 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/80c26c09-83a0-4b08-979b-a138a5ed5d4b-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.914977 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-575b568fc4-5wzxv"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.916576 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.918185 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-8tqc4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.919789 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-575b568fc4-lr7wd"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.923590 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-575b568fc4-5wzxv"] Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.925776 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5khjp\" (UniqueName: \"kubernetes.io/projected/80c26c09-83a0-4b08-979b-a138a5ed5d4b-kube-api-access-5khjp\") pod \"logging-loki-querier-76788598db-95xm4\" (UID: \"80c26c09-83a0-4b08-979b-a138a5ed5d4b\") " pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.977396 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.993091 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmm5q\" (UniqueName: \"kubernetes.io/projected/9c5e1e82-3053-4895-91ce-56475540fc35-kube-api-access-mmm5q\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.993130 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c5e1e82-3053-4895-91ce-56475540fc35-config\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995000 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c5e1e82-3053-4895-91ce-56475540fc35-config\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995440 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c5e1e82-3053-4895-91ce-56475540fc35-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995474 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-tls-secret\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995500 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9c5e1e82-3053-4895-91ce-56475540fc35-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995519 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-tenants\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995550 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " 
pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995578 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995629 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995662 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-rbac\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995684 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995762 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r9nn\" (UniqueName: \"kubernetes.io/projected/fa2c58c2-9b23-4360-897e-582237775277-kube-api-access-7r9nn\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995853 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995946 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-lokistack-gateway\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.995987 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-tenants\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " 
pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.996005 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-rbac\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.996023 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwwz\" (UniqueName: \"kubernetes.io/projected/a58b269c-6e15-4eda-aa6f-00e51aa132fe-kube-api-access-wjwwz\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.996085 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-lokistack-gateway\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.996102 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-tls-secret\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.996122 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9c5e1e82-3053-4895-91ce-56475540fc35-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.996142 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:15 crc kubenswrapper[4966]: I0127 15:56:15.998499 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c5e1e82-3053-4895-91ce-56475540fc35-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.004942 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9c5e1e82-3053-4895-91ce-56475540fc35-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.006311 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9c5e1e82-3053-4895-91ce-56475540fc35-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.013354 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmm5q\" (UniqueName: \"kubernetes.io/projected/9c5e1e82-3053-4895-91ce-56475540fc35-kube-api-access-mmm5q\") pod \"logging-loki-query-frontend-69d9546745-6s27c\" (UID: \"9c5e1e82-3053-4895-91ce-56475540fc35\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.092910 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.097857 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-tls-secret\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.097916 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-tenants\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.097947 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.097968 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.097994 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098011 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-rbac\") pod 
\"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098024 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098042 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r9nn\" (UniqueName: \"kubernetes.io/projected/fa2c58c2-9b23-4360-897e-582237775277-kube-api-access-7r9nn\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: E0127 15:56:16.098041 4966 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 27 15:56:16 crc kubenswrapper[4966]: E0127 15:56:16.098115 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-tls-secret podName:a58b269c-6e15-4eda-aa6f-00e51aa132fe nodeName:}" failed. No retries permitted until 2026-01-27 15:56:16.59809611 +0000 UTC m=+842.900889598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-tls-secret") pod "logging-loki-gateway-575b568fc4-lr7wd" (UID: "a58b269c-6e15-4eda-aa6f-00e51aa132fe") : secret "logging-loki-gateway-http" not found Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098056 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098396 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-lokistack-gateway\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098447 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-tenants\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098475 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-rbac\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc 
kubenswrapper[4966]: I0127 15:56:16.098496 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjwwz\" (UniqueName: \"kubernetes.io/projected/a58b269c-6e15-4eda-aa6f-00e51aa132fe-kube-api-access-wjwwz\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098604 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-lokistack-gateway\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098622 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-tls-secret\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098659 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.098858 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.099259 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.099584 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-rbac\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.099633 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-lokistack-gateway\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: E0127 15:56:16.099841 4966 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 27 15:56:16 
crc kubenswrapper[4966]: E0127 15:56:16.099937 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-tls-secret podName:fa2c58c2-9b23-4360-897e-582237775277 nodeName:}" failed. No retries permitted until 2026-01-27 15:56:16.599913598 +0000 UTC m=+842.902707156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-tls-secret") pod "logging-loki-gateway-575b568fc4-5wzxv" (UID: "fa2c58c2-9b23-4360-897e-582237775277") : secret "logging-loki-gateway-http" not found Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.100376 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-rbac\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.100609 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa2c58c2-9b23-4360-897e-582237775277-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.100635 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.101449 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-tenants\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.101461 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a58b269c-6e15-4eda-aa6f-00e51aa132fe-lokistack-gateway\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.101733 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.104357 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " 
pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.105547 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-tenants\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.117426 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjwwz\" (UniqueName: \"kubernetes.io/projected/a58b269c-6e15-4eda-aa6f-00e51aa132fe-kube-api-access-wjwwz\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.122634 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r9nn\" (UniqueName: \"kubernetes.io/projected/fa2c58c2-9b23-4360-897e-582237775277-kube-api-access-7r9nn\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.266389 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-95xm4"] Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.428009 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc"] Jan 27 15:56:16 crc kubenswrapper[4966]: W0127 15:56:16.429788 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc70eec8b_c8da_4620_9c5e_bb19e5d66424.slice/crio-2ba10edabcdf718559f85374cd5cd52172c0c8fdff1b3992dd5c7d8b2a3c62ea WatchSource:0}: Error finding container 2ba10edabcdf718559f85374cd5cd52172c0c8fdff1b3992dd5c7d8b2a3c62ea: Status 404 returned error can't find the container with id 2ba10edabcdf718559f85374cd5cd52172c0c8fdff1b3992dd5c7d8b2a3c62ea Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.546110 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-6s27c"] Jan 27 15:56:16 crc kubenswrapper[4966]: W0127 15:56:16.562520 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c5e1e82_3053_4895_91ce_56475540fc35.slice/crio-7c2ff2f0049884cbd6153c503594a91f574b475c297530a1bfbd1cc40111611e WatchSource:0}: Error finding container 7c2ff2f0049884cbd6153c503594a91f574b475c297530a1bfbd1cc40111611e: Status 404 returned error can't find the container with id 7c2ff2f0049884cbd6153c503594a91f574b475c297530a1bfbd1cc40111611e Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.611981 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-tls-secret\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.612063 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-tls-secret\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.616872 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fa2c58c2-9b23-4360-897e-582237775277-tls-secret\") pod \"logging-loki-gateway-575b568fc4-5wzxv\" (UID: \"fa2c58c2-9b23-4360-897e-582237775277\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.616874 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a58b269c-6e15-4eda-aa6f-00e51aa132fe-tls-secret\") pod \"logging-loki-gateway-575b568fc4-lr7wd\" (UID: \"a58b269c-6e15-4eda-aa6f-00e51aa132fe\") " pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.659103 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.661983 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.666458 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.669637 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.681457 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.743972 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.744774 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.746593 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.746851 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.758034 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.789762 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" event={"ID":"c70eec8b-c8da-4620-9c5e-bb19e5d66424","Type":"ContainerStarted","Data":"2ba10edabcdf718559f85374cd5cd52172c0c8fdff1b3992dd5c7d8b2a3c62ea"} Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.790847 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" event={"ID":"80c26c09-83a0-4b08-979b-a138a5ed5d4b","Type":"ContainerStarted","Data":"42b64716be9f2e67b7f10125a4c3be261132e42ad3f6fb06d4321e8dd1dfa978"} Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.791803 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" event={"ID":"9c5e1e82-3053-4895-91ce-56475540fc35","Type":"ContainerStarted","Data":"7c2ff2f0049884cbd6153c503594a91f574b475c297530a1bfbd1cc40111611e"} Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.815985 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816024 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816054 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816078 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1f36e25a-056a-45b8-9f46-9e76e589e556\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f36e25a-056a-45b8-9f46-9e76e589e556\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816105 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: 
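The reconciler_common.go entries above and below come from the kubelet volume manager's reconciler, which repeatedly diffs the volumes that scheduled pods require (the desired state of the world) against what is actually attached and mounted, then starts an asynchronous operation for each difference. An illustrative sketch of that diff loop, using stand-in types rather than kubelet's own:

    package main

    import "fmt"

    // reconcile brings the actual set of mounted volumes toward the desired
    // set, one operation per difference, mirroring the pattern in the log.
    func reconcile(desired, actual map[string]bool) {
    	for vol := range desired {
    		if !actual[vol] {
    			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
    			actual[vol] = true // the real reconciler starts an async operation instead
    		}
    	}
    	for vol := range actual {
    		if !desired[vol] {
    			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
    			delete(actual, vol)
    		}
    	}
    }

    func main() {
    	desired := map[string]bool{"config": true, "logging-loki-s3": true, "kube-api-access-wgvmb": true}
    	actual := map[string]bool{"config": true}
    	reconcile(desired, actual) // prints MountVolume lines for the two missing volumes
    }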
\"kubernetes.io/secret/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816158 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0caab707-59fa-4d4d-976b-e1f99d30fc01-config\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816174 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816188 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816206 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816230 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816250 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816285 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c476c6b7-e4b3-4b45-bbf8-19e6b8d448eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c476c6b7-e4b3-4b45-bbf8-19e6b8d448eb\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816514 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvmb\" (UniqueName: \"kubernetes.io/projected/e440811a-ec7d-4606-a78b-6b3d5062e044-kube-api-access-wgvmb\") pod \"logging-loki-ingester-0\" (UID: 
\"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816550 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e440811a-ec7d-4606-a78b-6b3d5062e044-config\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.816570 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tr25\" (UniqueName: \"kubernetes.io/projected/0caab707-59fa-4d4d-976b-e1f99d30fc01-kube-api-access-9tr25\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.831524 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.832850 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.834470 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.834662 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.843261 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.894577 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.910742 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.917804 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.917884 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.917947 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918019 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918218 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918302 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-config\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918376 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c476c6b7-e4b3-4b45-bbf8-19e6b8d448eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c476c6b7-e4b3-4b45-bbf8-19e6b8d448eb\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918416 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f87aa4c0-db1d-4faf-a1dc-6c6dc0e07bfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f87aa4c0-db1d-4faf-a1dc-6c6dc0e07bfe\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918446 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918543 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918655 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7th7h\" (UniqueName: \"kubernetes.io/projected/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-kube-api-access-7th7h\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918692 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0caab707-59fa-4d4d-976b-e1f99d30fc01-config\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918719 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918737 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918808 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918891 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvmb\" (UniqueName: \"kubernetes.io/projected/e440811a-ec7d-4606-a78b-6b3d5062e044-kube-api-access-wgvmb\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918962 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e440811a-ec7d-4606-a78b-6b3d5062e044-config\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.918991 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tr25\" (UniqueName: \"kubernetes.io/projected/0caab707-59fa-4d4d-976b-e1f99d30fc01-kube-api-access-9tr25\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.919014 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.919062 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.919106 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.919132 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1f36e25a-056a-45b8-9f46-9e76e589e556\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f36e25a-056a-45b8-9f46-9e76e589e556\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.920291 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.920448 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e440811a-ec7d-4606-a78b-6b3d5062e044-config\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.920575 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0caab707-59fa-4d4d-976b-e1f99d30fc01-config\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.920775 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.925103 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.925129 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.925255 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.925437 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.925709 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e440811a-ec7d-4606-a78b-6b3d5062e044-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.927452 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.927585 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c476c6b7-e4b3-4b45-bbf8-19e6b8d448eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c476c6b7-e4b3-4b45-bbf8-19e6b8d448eb\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b55f6c54ea805676925c6f43d51d793ff0dfb8195500f5bbfb4b45afa29ca860/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.927706 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.927965 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/272fa0bd98a2bf8777c31c8acf9b122f175a49dea957de9e953deea6bd8ea6de/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.927451 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.928218 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1f36e25a-056a-45b8-9f46-9e76e589e556\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f36e25a-056a-45b8-9f46-9e76e589e556\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9ae7a02db779adecfcf125856d8a3a9b76a2c465b62ef4520fd277d94d3ef230/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.930225 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/0caab707-59fa-4d4d-976b-e1f99d30fc01-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.940196 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tr25\" (UniqueName: \"kubernetes.io/projected/0caab707-59fa-4d4d-976b-e1f99d30fc01-kube-api-access-9tr25\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.941714 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvmb\" (UniqueName: \"kubernetes.io/projected/e440811a-ec7d-4606-a78b-6b3d5062e044-kube-api-access-wgvmb\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.969461 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c476c6b7-e4b3-4b45-bbf8-19e6b8d448eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c476c6b7-e4b3-4b45-bbf8-19e6b8d448eb\") pod \"logging-loki-compactor-0\" (UID: \"0caab707-59fa-4d4d-976b-e1f99d30fc01\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.969950 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1f36e25a-056a-45b8-9f46-9e76e589e556\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f36e25a-056a-45b8-9f46-9e76e589e556\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.975512 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0"
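The csi_attacher.go entries above show that the hostpath-provisioner CSI driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, so MountDevice (the NodeStageVolume step at the global mount path) is skipped and the kubelet proceeds directly to the per-pod SetUp. A sketch of that capability gate, with stand-in types rather than the real CSI gRPC API:

    package main

    import "fmt"

    // driver is a stand-in; in reality the kubelet checks the driver's
    // NodeGetCapabilities response, not a map like this.
    type driver struct {
    	name         string
    	capabilities map[string]bool
    }

    // mountDevice mirrors the gate behind the log line above: only drivers
    // that claim STAGE_UNSTAGE_VOLUME get a NodeStageVolume (MountDevice)
    // call; otherwise the kubelet goes straight to NodePublishVolume (SetUp).
    func mountDevice(d driver, volume, globalMountPath string) {
    	if !d.capabilities["STAGE_UNSTAGE_VOLUME"] {
    		fmt.Printf("%s: STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice for %s\n", d.name, volume)
    		return
    	}
    	fmt.Printf("%s: staging %s at %s\n", d.name, volume, globalMountPath)
    }

    func main() {
    	hostpath := driver{name: "kubevirt.io.hostpath-provisioner", capabilities: map[string]bool{}}
    	mountDevice(hostpath, "pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350", "/var/lib/kubelet/plugins/.../globalmount")
    }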
succeeded for volume \"pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee17e5b0-ee9b-4b09-8d02-7d70fc226350\") pod \"logging-loki-ingester-0\" (UID: \"e440811a-ec7d-4606-a78b-6b3d5062e044\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:16 crc kubenswrapper[4966]: I0127 15:56:16.980110 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.022386 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.022468 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.022495 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.022528 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-config\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.022557 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f87aa4c0-db1d-4faf-a1dc-6c6dc0e07bfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f87aa4c0-db1d-4faf-a1dc-6c6dc0e07bfe\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.022577 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.022620 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7th7h\" (UniqueName: \"kubernetes.io/projected/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-kube-api-access-7th7h\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.023994 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-config\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.024017 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.026524 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.026565 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f87aa4c0-db1d-4faf-a1dc-6c6dc0e07bfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f87aa4c0-db1d-4faf-a1dc-6c6dc0e07bfe\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1a76b0f9f0ac425a49e72f0bdec16a6af444eb03c6740cbf6eba8b492e17a27d/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.027214 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.027516 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.029447 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.055835 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7th7h\" (UniqueName: \"kubernetes.io/projected/988dcb32-33f1-4e22-8b8c-a1a3b09828b3-kube-api-access-7th7h\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.057174 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f87aa4c0-db1d-4faf-a1dc-6c6dc0e07bfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f87aa4c0-db1d-4faf-a1dc-6c6dc0e07bfe\") pod \"logging-loki-index-gateway-0\" (UID: \"988dcb32-33f1-4e22-8b8c-a1a3b09828b3\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.069370 4966 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.160330 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.326562 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.355639 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 15:56:17 crc kubenswrapper[4966]: W0127 15:56:17.371705 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0caab707_59fa_4d4d_976b_e1f99d30fc01.slice/crio-e3d2129b2b42d23d9c0b8a6b2db2336d3ac0013b4dbb2c5346976ccb3e1ffa7e WatchSource:0}: Error finding container e3d2129b2b42d23d9c0b8a6b2db2336d3ac0013b4dbb2c5346976ccb3e1ffa7e: Status 404 returned error can't find the container with id e3d2129b2b42d23d9c0b8a6b2db2336d3ac0013b4dbb2c5346976ccb3e1ffa7e Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.381668 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-575b568fc4-5wzxv"] Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.387542 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-575b568fc4-lr7wd"] Jan 27 15:56:17 crc kubenswrapper[4966]: W0127 15:56:17.390647 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2c58c2_9b23_4360_897e_582237775277.slice/crio-20829e1dbe6dfe2d833f1e5da656a31d504d37cbcc570be2181c0923349e37bf WatchSource:0}: Error finding container 20829e1dbe6dfe2d833f1e5da656a31d504d37cbcc570be2181c0923349e37bf: Status 404 returned error can't find the container with id 20829e1dbe6dfe2d833f1e5da656a31d504d37cbcc570be2181c0923349e37bf Jan 27 15:56:17 crc kubenswrapper[4966]: W0127 15:56:17.397230 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda58b269c_6e15_4eda_aa6f_00e51aa132fe.slice/crio-f8cb3a9d06b48ce4a7ac2041b5bccb422ec05c5b598d1366701d0c0883384ef9 WatchSource:0}: Error finding container f8cb3a9d06b48ce4a7ac2041b5bccb422ec05c5b598d1366701d0c0883384ef9: Status 404 returned error can't find the container with id f8cb3a9d06b48ce4a7ac2041b5bccb422ec05c5b598d1366701d0c0883384ef9 Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.637066 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 15:56:17 crc kubenswrapper[4966]: W0127 15:56:17.647804 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod988dcb32_33f1_4e22_8b8c_a1a3b09828b3.slice/crio-09841ddf500ee748be688be992a3c1e37834a3c5aa3d16a22a52cc91e0e46bed WatchSource:0}: Error finding container 09841ddf500ee748be688be992a3c1e37834a3c5aa3d16a22a52cc91e0e46bed: Status 404 returned error can't find the container with id 09841ddf500ee748be688be992a3c1e37834a3c5aa3d16a22a52cc91e0e46bed Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.797788 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" 
event={"ID":"988dcb32-33f1-4e22-8b8c-a1a3b09828b3","Type":"ContainerStarted","Data":"09841ddf500ee748be688be992a3c1e37834a3c5aa3d16a22a52cc91e0e46bed"} Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.800003 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"0caab707-59fa-4d4d-976b-e1f99d30fc01","Type":"ContainerStarted","Data":"e3d2129b2b42d23d9c0b8a6b2db2336d3ac0013b4dbb2c5346976ccb3e1ffa7e"} Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.800832 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" event={"ID":"a58b269c-6e15-4eda-aa6f-00e51aa132fe","Type":"ContainerStarted","Data":"f8cb3a9d06b48ce4a7ac2041b5bccb422ec05c5b598d1366701d0c0883384ef9"} Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.801912 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"e440811a-ec7d-4606-a78b-6b3d5062e044","Type":"ContainerStarted","Data":"54d71eb91881bdbba5f07f83ee7ce33cc5bf344d3198bc5d0b9bb233baa08f0e"} Jan 27 15:56:17 crc kubenswrapper[4966]: I0127 15:56:17.802624 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" event={"ID":"fa2c58c2-9b23-4360-897e-582237775277","Type":"ContainerStarted","Data":"20829e1dbe6dfe2d833f1e5da656a31d504d37cbcc570be2181c0923349e37bf"} Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.830457 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" event={"ID":"a58b269c-6e15-4eda-aa6f-00e51aa132fe","Type":"ContainerStarted","Data":"515ca9a0887d452dae36b3044dbabda7025bd585b9310c5c7001a181ad59c260"} Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.832307 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" event={"ID":"9c5e1e82-3053-4895-91ce-56475540fc35","Type":"ContainerStarted","Data":"fe61141b8e677edc5546c7712fece49b3a1ff7c3bfcfd113f2cde895e0b6590a"} Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.832609 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.833757 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"e440811a-ec7d-4606-a78b-6b3d5062e044","Type":"ContainerStarted","Data":"f8f071b6ddc2f3d9d3d891f008a17d71ff3718ec5d3b7c51859bf125a61dc990"} Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.833910 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.834870 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" event={"ID":"fa2c58c2-9b23-4360-897e-582237775277","Type":"ContainerStarted","Data":"2a3b64360f895fe9164cded07ff1b277a991b1dd3bb90558198634eb6f8d722c"} Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.836384 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"988dcb32-33f1-4e22-8b8c-a1a3b09828b3","Type":"ContainerStarted","Data":"d783512b7ff8d8dadacabb6ae93ab6c770de5f0a4ec4b5eccb6dd7b83de84092"} Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.836515 4966 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.837403 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" event={"ID":"c70eec8b-c8da-4620-9c5e-bb19e5d66424","Type":"ContainerStarted","Data":"4cf60da666ffe0a5a6b76e5c3b1f09475688ed8223a123d1adf75ce3278016b1"} Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.837514 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.838783 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"0caab707-59fa-4d4d-976b-e1f99d30fc01","Type":"ContainerStarted","Data":"7ddbb117e00fe04e6267605b617be4f40e954a05ef86e6638ce96bd6cb0931ba"} Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.838988 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.840255 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" event={"ID":"80c26c09-83a0-4b08-979b-a138a5ed5d4b","Type":"ContainerStarted","Data":"5e8bc8b0a2633405d521fa8a04dc2875d834d3134acf556b3850db83e0d6417e"} Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.840478 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.852109 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" podStartSLOduration=2.5196701040000002 podStartE2EDuration="6.852090196s" podCreationTimestamp="2026-01-27 15:56:15 +0000 UTC" firstStartedPulling="2026-01-27 15:56:16.566442761 +0000 UTC m=+842.869236249" lastFinishedPulling="2026-01-27 15:56:20.898862843 +0000 UTC m=+847.201656341" observedRunningTime="2026-01-27 15:56:21.849641229 +0000 UTC m=+848.152434727" watchObservedRunningTime="2026-01-27 15:56:21.852090196 +0000 UTC m=+848.154883674" Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.905873 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" podStartSLOduration=2.442979874 podStartE2EDuration="6.905850115s" podCreationTimestamp="2026-01-27 15:56:15 +0000 UTC" firstStartedPulling="2026-01-27 15:56:16.431707938 +0000 UTC m=+842.734501426" lastFinishedPulling="2026-01-27 15:56:20.894578139 +0000 UTC m=+847.197371667" observedRunningTime="2026-01-27 15:56:21.876999528 +0000 UTC m=+848.179793076" watchObservedRunningTime="2026-01-27 15:56:21.905850115 +0000 UTC m=+848.208643653" Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.909821 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.423353424 podStartE2EDuration="6.909798658s" podCreationTimestamp="2026-01-27 15:56:15 +0000 UTC" firstStartedPulling="2026-01-27 15:56:17.375778829 +0000 UTC m=+843.678572337" lastFinishedPulling="2026-01-27 15:56:20.862224073 +0000 UTC m=+847.165017571" observedRunningTime="2026-01-27 15:56:21.902131827 +0000 UTC m=+848.204925335" watchObservedRunningTime="2026-01-27 
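In the pod_startup_latency_tracker entries, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m=+... clock). The compactor entry above checks out: 6.909798658s - (847.165017571s - 843.678572337s) = 3.423353424s. A quick reproduction with the values copied from that entry:

    package main

    import "fmt"

    func main() {
    	// Monotonic-clock offsets (the m=+... values) and the E2E duration,
    	// copied from the logging-loki-compactor-0 entry above.
    	firstStartedPulling := 843.678572337 // seconds
    	lastFinishedPulling := 847.165017571 // seconds
    	podStartE2E := 6.909798658           // watchObservedRunningTime - podCreationTimestamp

    	imagePull := lastFinishedPulling - firstStartedPulling
    	slo := podStartE2E - imagePull                     // startup latency excluding image pulls
    	fmt.Printf("podStartSLOduration = %.9fs\n", slo) // 3.423353424s, matching the log
    }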
Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.937104 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" podStartSLOduration=2.403161291 podStartE2EDuration="6.937075495s" podCreationTimestamp="2026-01-27 15:56:15 +0000 UTC" firstStartedPulling="2026-01-27 15:56:16.288497099 +0000 UTC m=+842.591290587" lastFinishedPulling="2026-01-27 15:56:20.822411303 +0000 UTC m=+847.125204791" observedRunningTime="2026-01-27 15:56:21.924656045 +0000 UTC m=+848.227449563" watchObservedRunningTime="2026-01-27 15:56:21.937075495 +0000 UTC m=+848.239869003"
Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.946213 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.701808233 podStartE2EDuration="6.946194891s" podCreationTimestamp="2026-01-27 15:56:15 +0000 UTC" firstStartedPulling="2026-01-27 15:56:17.650294984 +0000 UTC m=+843.953088472" lastFinishedPulling="2026-01-27 15:56:20.894681632 +0000 UTC m=+847.197475130" observedRunningTime="2026-01-27 15:56:21.945658485 +0000 UTC m=+848.248451973" watchObservedRunningTime="2026-01-27 15:56:21.946194891 +0000 UTC m=+848.248988399"
Jan 27 15:56:21 crc kubenswrapper[4966]: I0127 15:56:21.965495 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.423862203 podStartE2EDuration="6.965472676s" podCreationTimestamp="2026-01-27 15:56:15 +0000 UTC" firstStartedPulling="2026-01-27 15:56:17.334521407 +0000 UTC m=+843.637314895" lastFinishedPulling="2026-01-27 15:56:20.87613188 +0000 UTC m=+847.178925368" observedRunningTime="2026-01-27 15:56:21.960051246 +0000 UTC m=+848.262844744" watchObservedRunningTime="2026-01-27 15:56:21.965472676 +0000 UTC m=+848.268266164"
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.879304 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" event={"ID":"a58b269c-6e15-4eda-aa6f-00e51aa132fe","Type":"ContainerStarted","Data":"651a57da6ab98309d90e289c47796be7df1147bc82cc5d00751149df5137e048"}
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.881126 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd"
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.881291 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd"
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.882699 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" event={"ID":"fa2c58c2-9b23-4360-897e-582237775277","Type":"ContainerStarted","Data":"c4ca2b1314d5515fa4dea62271c6aff0575d7b311c92a072536630a9feef6fa5"}
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.882954 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv"
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.887854 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd"
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.896613 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd"
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.898321 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv"
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.908349 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podStartSLOduration=3.477330308 podStartE2EDuration="12.908331801s" podCreationTimestamp="2026-01-27 15:56:15 +0000 UTC" firstStartedPulling="2026-01-27 15:56:17.399614866 +0000 UTC m=+843.702408354" lastFinishedPulling="2026-01-27 15:56:26.830616359 +0000 UTC m=+853.133409847" observedRunningTime="2026-01-27 15:56:27.901669332 +0000 UTC m=+854.204462820" watchObservedRunningTime="2026-01-27 15:56:27.908331801 +0000 UTC m=+854.211125289"
Jan 27 15:56:27 crc kubenswrapper[4966]: I0127 15:56:27.955229 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podStartSLOduration=3.498945078 podStartE2EDuration="12.955207413s" podCreationTimestamp="2026-01-27 15:56:15 +0000 UTC" firstStartedPulling="2026-01-27 15:56:17.392877086 +0000 UTC m=+843.695670564" lastFinishedPulling="2026-01-27 15:56:26.849139371 +0000 UTC m=+853.151932899" observedRunningTime="2026-01-27 15:56:27.952001062 +0000 UTC m=+854.254794640" watchObservedRunningTime="2026-01-27 15:56:27.955207413 +0000 UTC m=+854.258000901"
Jan 27 15:56:28 crc kubenswrapper[4966]: I0127 15:56:28.890509 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv"
Jan 27 15:56:28 crc kubenswrapper[4966]: I0127 15:56:28.900724 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv"
Jan 27 15:56:36 crc kubenswrapper[4966]: I0127 15:56:36.102754 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c"
Jan 27 15:56:36 crc kubenswrapper[4966]: I0127 15:56:36.987728 4966 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 27 15:56:36 crc kubenswrapper[4966]: I0127 15:56:36.987824 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e440811a-ec7d-4606-a78b-6b3d5062e044" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 15:56:37 crc kubenswrapper[4966]: I0127 15:56:37.082471 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Jan 27 15:56:37 crc kubenswrapper[4966]: I0127 15:56:37.171775 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 15:56:45 crc kubenswrapper[4966]: I0127 15:56:45.858009 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc"
Jan 27 15:56:45 crc kubenswrapper[4966]: I0127 15:56:45.985688 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-95xm4"
Jan 27 15:56:46 crc kubenswrapper[4966]: I0127 15:56:46.989602 4966 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 27 15:56:46 crc kubenswrapper[4966]: I0127 15:56:46.990134 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e440811a-ec7d-4606-a78b-6b3d5062e044" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.777482 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mfh7v"]
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.780514 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.793386 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfh7v"]
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.866280 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-catalog-content\") pod \"certified-operators-mfh7v\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.866329 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-utilities\") pod \"certified-operators-mfh7v\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.866393 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4hdr\" (UniqueName: \"kubernetes.io/projected/01891a9b-107e-432d-a1dc-138529de7c9e-kube-api-access-p4hdr\") pod \"certified-operators-mfh7v\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.968383 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-catalog-content\") pod \"certified-operators-mfh7v\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.968434 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-utilities\") pod \"certified-operators-mfh7v\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.968463 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4hdr\" (UniqueName: \"kubernetes.io/projected/01891a9b-107e-432d-a1dc-138529de7c9e-kube-api-access-p4hdr\") pod \"certified-operators-mfh7v\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.968992 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-utilities\") pod \"certified-operators-mfh7v\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.969253 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-catalog-content\") pod \"certified-operators-mfh7v\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.988146 4966 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.988213 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e440811a-ec7d-4606-a78b-6b3d5062e044" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 15:56:56 crc kubenswrapper[4966]: I0127 15:56:56.989283 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4hdr\" (UniqueName: \"kubernetes.io/projected/01891a9b-107e-432d-a1dc-138529de7c9e-kube-api-access-p4hdr\") pod \"certified-operators-mfh7v\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:57 crc kubenswrapper[4966]: I0127 15:56:57.153491 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfh7v"
Jan 27 15:56:57 crc kubenswrapper[4966]: I0127 15:56:57.594024 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfh7v"]
Jan 27 15:56:58 crc kubenswrapper[4966]: I0127 15:56:58.118038 4966 generic.go:334] "Generic (PLEG): container finished" podID="01891a9b-107e-432d-a1dc-138529de7c9e" containerID="884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c" exitCode=0
Jan 27 15:56:58 crc kubenswrapper[4966]: I0127 15:56:58.118106 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfh7v" event={"ID":"01891a9b-107e-432d-a1dc-138529de7c9e","Type":"ContainerDied","Data":"884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c"}
Jan 27 15:56:58 crc kubenswrapper[4966]: I0127 15:56:58.118391 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfh7v" event={"ID":"01891a9b-107e-432d-a1dc-138529de7c9e","Type":"ContainerStarted","Data":"90e4b01d3f298afe9b6665df9bfc7902f288b3ff2c78733934b1bf84d01b09c6"}
Jan 27 15:57:00 crc kubenswrapper[4966]: I0127 15:57:00.140526 4966 generic.go:334] "Generic (PLEG): container finished" podID="01891a9b-107e-432d-a1dc-138529de7c9e" containerID="8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182" exitCode=0
Jan 27 15:57:00 crc kubenswrapper[4966]: I0127 15:57:00.141036 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfh7v" event={"ID":"01891a9b-107e-432d-a1dc-138529de7c9e","Type":"ContainerDied","Data":"8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182"}
Jan 27 15:57:01 crc kubenswrapper[4966]: I0127 15:57:01.166952 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfh7v" event={"ID":"01891a9b-107e-432d-a1dc-138529de7c9e","Type":"ContainerStarted","Data":"612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90"}
Jan 27 15:57:01 crc kubenswrapper[4966]: I0127 15:57:01.192409 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mfh7v" podStartSLOduration=2.7400911900000002 podStartE2EDuration="5.192385696s" podCreationTimestamp="2026-01-27 15:56:56 +0000 UTC" firstStartedPulling="2026-01-27 15:56:58.12009807 +0000 UTC m=+884.422891558" lastFinishedPulling="2026-01-27 15:57:00.572392566 +0000 UTC m=+886.875186064" observedRunningTime="2026-01-27 15:57:01.184497288 +0000 UTC m=+887.487290786" watchObservedRunningTime="2026-01-27 15:57:01.192385696 +0000 UTC m=+887.495179194"
Jan 27 15:57:01 crc kubenswrapper[4966]: I0127 15:57:01.956516 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ngm9n"]
Jan 27 15:57:01 crc kubenswrapper[4966]: I0127 15:57:01.958503 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:01 crc kubenswrapper[4966]: I0127 15:57:01.972803 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngm9n"] Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.050445 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dktk2\" (UniqueName: \"kubernetes.io/projected/15d5441f-1992-4f5f-af00-5050a1903e31-kube-api-access-dktk2\") pod \"community-operators-ngm9n\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.050486 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-catalog-content\") pod \"community-operators-ngm9n\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.050521 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-utilities\") pod \"community-operators-ngm9n\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.151571 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dktk2\" (UniqueName: \"kubernetes.io/projected/15d5441f-1992-4f5f-af00-5050a1903e31-kube-api-access-dktk2\") pod \"community-operators-ngm9n\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.151613 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-catalog-content\") pod \"community-operators-ngm9n\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.151638 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-utilities\") pod \"community-operators-ngm9n\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.152132 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-utilities\") pod \"community-operators-ngm9n\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.152145 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-catalog-content\") pod \"community-operators-ngm9n\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.171566 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dktk2\" (UniqueName: \"kubernetes.io/projected/15d5441f-1992-4f5f-af00-5050a1903e31-kube-api-access-dktk2\") pod \"community-operators-ngm9n\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.276442 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:02 crc kubenswrapper[4966]: I0127 15:57:02.764095 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngm9n"] Jan 27 15:57:03 crc kubenswrapper[4966]: I0127 15:57:03.180775 4966 generic.go:334] "Generic (PLEG): container finished" podID="15d5441f-1992-4f5f-af00-5050a1903e31" containerID="52496e9d40d6668c0e514d7dbd96ec04e62b0ff36496414d89030eeb2d44948f" exitCode=0 Jan 27 15:57:03 crc kubenswrapper[4966]: I0127 15:57:03.181105 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngm9n" event={"ID":"15d5441f-1992-4f5f-af00-5050a1903e31","Type":"ContainerDied","Data":"52496e9d40d6668c0e514d7dbd96ec04e62b0ff36496414d89030eeb2d44948f"} Jan 27 15:57:03 crc kubenswrapper[4966]: I0127 15:57:03.182729 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngm9n" event={"ID":"15d5441f-1992-4f5f-af00-5050a1903e31","Type":"ContainerStarted","Data":"e7b4acf1e02e97db6415880a55f8c221a77c141acfe9de1e644859b7a28de27a"} Jan 27 15:57:04 crc kubenswrapper[4966]: I0127 15:57:04.191488 4966 generic.go:334] "Generic (PLEG): container finished" podID="15d5441f-1992-4f5f-af00-5050a1903e31" containerID="250382c03fbf079ac9d781cc7ad19815e62a755bcd2019ed6a0793943fd62d4c" exitCode=0 Jan 27 15:57:04 crc kubenswrapper[4966]: I0127 15:57:04.191559 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngm9n" event={"ID":"15d5441f-1992-4f5f-af00-5050a1903e31","Type":"ContainerDied","Data":"250382c03fbf079ac9d781cc7ad19815e62a755bcd2019ed6a0793943fd62d4c"} Jan 27 15:57:05 crc kubenswrapper[4966]: I0127 15:57:05.201960 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngm9n" event={"ID":"15d5441f-1992-4f5f-af00-5050a1903e31","Type":"ContainerStarted","Data":"37952f4eec7ce2584d1cad2a4b50be06ec0a1784900a6deb33994f385aeecda1"} Jan 27 15:57:06 crc kubenswrapper[4966]: I0127 15:57:06.985151 4966 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 27 15:57:06 crc kubenswrapper[4966]: I0127 15:57:06.985470 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e440811a-ec7d-4606-a78b-6b3d5062e044" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 15:57:07 crc kubenswrapper[4966]: I0127 15:57:07.154134 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mfh7v" Jan 27 15:57:07 crc kubenswrapper[4966]: I0127 15:57:07.154198 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mfh7v" Jan 27 15:57:07 crc kubenswrapper[4966]: I0127 
15:57:07.203692 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mfh7v" Jan 27 15:57:07 crc kubenswrapper[4966]: I0127 15:57:07.225757 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ngm9n" podStartSLOduration=4.766768117 podStartE2EDuration="6.225737111s" podCreationTimestamp="2026-01-27 15:57:01 +0000 UTC" firstStartedPulling="2026-01-27 15:57:03.182762286 +0000 UTC m=+889.485555774" lastFinishedPulling="2026-01-27 15:57:04.64173129 +0000 UTC m=+890.944524768" observedRunningTime="2026-01-27 15:57:05.217748807 +0000 UTC m=+891.520542305" watchObservedRunningTime="2026-01-27 15:57:07.225737111 +0000 UTC m=+893.528530599" Jan 27 15:57:07 crc kubenswrapper[4966]: I0127 15:57:07.272264 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mfh7v" Jan 27 15:57:08 crc kubenswrapper[4966]: I0127 15:57:08.343970 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mfh7v"] Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.241274 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mfh7v" podUID="01891a9b-107e-432d-a1dc-138529de7c9e" containerName="registry-server" containerID="cri-o://612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90" gracePeriod=2 Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.650821 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfh7v" Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.674301 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4hdr\" (UniqueName: \"kubernetes.io/projected/01891a9b-107e-432d-a1dc-138529de7c9e-kube-api-access-p4hdr\") pod \"01891a9b-107e-432d-a1dc-138529de7c9e\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.674662 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-utilities\") pod \"01891a9b-107e-432d-a1dc-138529de7c9e\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.674750 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-catalog-content\") pod \"01891a9b-107e-432d-a1dc-138529de7c9e\" (UID: \"01891a9b-107e-432d-a1dc-138529de7c9e\") " Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.677826 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-utilities" (OuterVolumeSpecName: "utilities") pod "01891a9b-107e-432d-a1dc-138529de7c9e" (UID: "01891a9b-107e-432d-a1dc-138529de7c9e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.681855 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01891a9b-107e-432d-a1dc-138529de7c9e-kube-api-access-p4hdr" (OuterVolumeSpecName: "kube-api-access-p4hdr") pod "01891a9b-107e-432d-a1dc-138529de7c9e" (UID: "01891a9b-107e-432d-a1dc-138529de7c9e"). InnerVolumeSpecName "kube-api-access-p4hdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.737920 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01891a9b-107e-432d-a1dc-138529de7c9e" (UID: "01891a9b-107e-432d-a1dc-138529de7c9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.777219 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.777248 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01891a9b-107e-432d-a1dc-138529de7c9e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:09 crc kubenswrapper[4966]: I0127 15:57:09.777260 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4hdr\" (UniqueName: \"kubernetes.io/projected/01891a9b-107e-432d-a1dc-138529de7c9e-kube-api-access-p4hdr\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.119247 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.119304 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.247839 4966 generic.go:334] "Generic (PLEG): container finished" podID="01891a9b-107e-432d-a1dc-138529de7c9e" containerID="612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90" exitCode=0 Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.247890 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mfh7v" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.247885 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfh7v" event={"ID":"01891a9b-107e-432d-a1dc-138529de7c9e","Type":"ContainerDied","Data":"612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90"} Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.247978 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfh7v" event={"ID":"01891a9b-107e-432d-a1dc-138529de7c9e","Type":"ContainerDied","Data":"90e4b01d3f298afe9b6665df9bfc7902f288b3ff2c78733934b1bf84d01b09c6"} Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.248001 4966 scope.go:117] "RemoveContainer" containerID="612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.271909 4966 scope.go:117] "RemoveContainer" containerID="8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.276097 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mfh7v"] Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.281048 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mfh7v"] Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.303228 4966 scope.go:117] "RemoveContainer" containerID="884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.323863 4966 scope.go:117] "RemoveContainer" containerID="612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90" Jan 27 15:57:10 crc kubenswrapper[4966]: E0127 15:57:10.324367 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90\": container with ID starting with 612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90 not found: ID does not exist" containerID="612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.324398 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90"} err="failed to get container status \"612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90\": rpc error: code = NotFound desc = could not find container \"612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90\": container with ID starting with 612008470c6fdc5c2d33a47226a7ec91979fcd32c6448276aca5248707126c90 not found: ID does not exist" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.324420 4966 scope.go:117] "RemoveContainer" containerID="8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182" Jan 27 15:57:10 crc kubenswrapper[4966]: E0127 15:57:10.324680 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182\": container with ID starting with 8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182 not found: ID does not exist" containerID="8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.324730 4966 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182"} err="failed to get container status \"8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182\": rpc error: code = NotFound desc = could not find container \"8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182\": container with ID starting with 8d4e6354c67cccc91f7a75b107c9ff3bb3114b9e5655d879bb6e908d389f2182 not found: ID does not exist" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.324761 4966 scope.go:117] "RemoveContainer" containerID="884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c" Jan 27 15:57:10 crc kubenswrapper[4966]: E0127 15:57:10.325040 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c\": container with ID starting with 884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c not found: ID does not exist" containerID="884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.325065 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c"} err="failed to get container status \"884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c\": rpc error: code = NotFound desc = could not find container \"884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c\": container with ID starting with 884e3e042f2696c4aa9299a07e61309502bf4062e5c224fd82bcfbfd84313b8c not found: ID does not exist" Jan 27 15:57:10 crc kubenswrapper[4966]: I0127 15:57:10.537163 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01891a9b-107e-432d-a1dc-138529de7c9e" path="/var/lib/kubelet/pods/01891a9b-107e-432d-a1dc-138529de7c9e/volumes" Jan 27 15:57:12 crc kubenswrapper[4966]: I0127 15:57:12.277158 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:12 crc kubenswrapper[4966]: I0127 15:57:12.277489 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:12 crc kubenswrapper[4966]: I0127 15:57:12.337044 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:13 crc kubenswrapper[4966]: I0127 15:57:13.357242 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:13 crc kubenswrapper[4966]: I0127 15:57:13.752032 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngm9n"] Jan 27 15:57:15 crc kubenswrapper[4966]: I0127 15:57:15.320556 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ngm9n" podUID="15d5441f-1992-4f5f-af00-5050a1903e31" containerName="registry-server" containerID="cri-o://37952f4eec7ce2584d1cad2a4b50be06ec0a1784900a6deb33994f385aeecda1" gracePeriod=2 Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.333529 4966 generic.go:334] "Generic (PLEG): container finished" podID="15d5441f-1992-4f5f-af00-5050a1903e31" 
containerID="37952f4eec7ce2584d1cad2a4b50be06ec0a1784900a6deb33994f385aeecda1" exitCode=0 Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.333619 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngm9n" event={"ID":"15d5441f-1992-4f5f-af00-5050a1903e31","Type":"ContainerDied","Data":"37952f4eec7ce2584d1cad2a4b50be06ec0a1784900a6deb33994f385aeecda1"} Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.467934 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.581368 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dktk2\" (UniqueName: \"kubernetes.io/projected/15d5441f-1992-4f5f-af00-5050a1903e31-kube-api-access-dktk2\") pod \"15d5441f-1992-4f5f-af00-5050a1903e31\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.581521 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-utilities\") pod \"15d5441f-1992-4f5f-af00-5050a1903e31\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.582654 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-utilities" (OuterVolumeSpecName: "utilities") pod "15d5441f-1992-4f5f-af00-5050a1903e31" (UID: "15d5441f-1992-4f5f-af00-5050a1903e31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.582826 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-catalog-content\") pod \"15d5441f-1992-4f5f-af00-5050a1903e31\" (UID: \"15d5441f-1992-4f5f-af00-5050a1903e31\") " Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.583290 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.595294 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d5441f-1992-4f5f-af00-5050a1903e31-kube-api-access-dktk2" (OuterVolumeSpecName: "kube-api-access-dktk2") pod "15d5441f-1992-4f5f-af00-5050a1903e31" (UID: "15d5441f-1992-4f5f-af00-5050a1903e31"). InnerVolumeSpecName "kube-api-access-dktk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.642183 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15d5441f-1992-4f5f-af00-5050a1903e31" (UID: "15d5441f-1992-4f5f-af00-5050a1903e31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.685150 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d5441f-1992-4f5f-af00-5050a1903e31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.685196 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dktk2\" (UniqueName: \"kubernetes.io/projected/15d5441f-1992-4f5f-af00-5050a1903e31-kube-api-access-dktk2\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:16 crc kubenswrapper[4966]: I0127 15:57:16.986142 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 27 15:57:17 crc kubenswrapper[4966]: I0127 15:57:17.343326 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngm9n" event={"ID":"15d5441f-1992-4f5f-af00-5050a1903e31","Type":"ContainerDied","Data":"e7b4acf1e02e97db6415880a55f8c221a77c141acfe9de1e644859b7a28de27a"} Jan 27 15:57:17 crc kubenswrapper[4966]: I0127 15:57:17.343380 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngm9n" Jan 27 15:57:17 crc kubenswrapper[4966]: I0127 15:57:17.343659 4966 scope.go:117] "RemoveContainer" containerID="37952f4eec7ce2584d1cad2a4b50be06ec0a1784900a6deb33994f385aeecda1" Jan 27 15:57:17 crc kubenswrapper[4966]: I0127 15:57:17.361652 4966 scope.go:117] "RemoveContainer" containerID="250382c03fbf079ac9d781cc7ad19815e62a755bcd2019ed6a0793943fd62d4c" Jan 27 15:57:17 crc kubenswrapper[4966]: I0127 15:57:17.382025 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngm9n"] Jan 27 15:57:17 crc kubenswrapper[4966]: I0127 15:57:17.388759 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ngm9n"] Jan 27 15:57:17 crc kubenswrapper[4966]: I0127 15:57:17.402227 4966 scope.go:117] "RemoveContainer" containerID="52496e9d40d6668c0e514d7dbd96ec04e62b0ff36496414d89030eeb2d44948f" Jan 27 15:57:18 crc kubenswrapper[4966]: I0127 15:57:18.551979 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d5441f-1992-4f5f-af00-5050a1903e31" path="/var/lib/kubelet/pods/15d5441f-1992-4f5f-af00-5050a1903e31/volumes" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.332604 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6zdjj"] Jan 27 15:57:21 crc kubenswrapper[4966]: E0127 15:57:21.332879 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01891a9b-107e-432d-a1dc-138529de7c9e" containerName="registry-server" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.332911 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="01891a9b-107e-432d-a1dc-138529de7c9e" containerName="registry-server" Jan 27 15:57:21 crc kubenswrapper[4966]: E0127 15:57:21.332921 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d5441f-1992-4f5f-af00-5050a1903e31" containerName="registry-server" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.332927 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d5441f-1992-4f5f-af00-5050a1903e31" containerName="registry-server" Jan 27 15:57:21 crc kubenswrapper[4966]: E0127 15:57:21.332940 4966 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="15d5441f-1992-4f5f-af00-5050a1903e31" containerName="extract-content" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.332946 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d5441f-1992-4f5f-af00-5050a1903e31" containerName="extract-content" Jan 27 15:57:21 crc kubenswrapper[4966]: E0127 15:57:21.332962 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01891a9b-107e-432d-a1dc-138529de7c9e" containerName="extract-content" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.332969 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="01891a9b-107e-432d-a1dc-138529de7c9e" containerName="extract-content" Jan 27 15:57:21 crc kubenswrapper[4966]: E0127 15:57:21.332984 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d5441f-1992-4f5f-af00-5050a1903e31" containerName="extract-utilities" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.332990 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d5441f-1992-4f5f-af00-5050a1903e31" containerName="extract-utilities" Jan 27 15:57:21 crc kubenswrapper[4966]: E0127 15:57:21.333002 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01891a9b-107e-432d-a1dc-138529de7c9e" containerName="extract-utilities" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.333007 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="01891a9b-107e-432d-a1dc-138529de7c9e" containerName="extract-utilities" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.333134 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d5441f-1992-4f5f-af00-5050a1903e31" containerName="registry-server" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.333150 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="01891a9b-107e-432d-a1dc-138529de7c9e" containerName="registry-server" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.339441 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.352591 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zdjj"] Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.462436 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nxm7\" (UniqueName: \"kubernetes.io/projected/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-kube-api-access-2nxm7\") pod \"redhat-marketplace-6zdjj\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.462955 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-utilities\") pod \"redhat-marketplace-6zdjj\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.463089 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-catalog-content\") pod \"redhat-marketplace-6zdjj\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.564034 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxm7\" (UniqueName: \"kubernetes.io/projected/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-kube-api-access-2nxm7\") pod \"redhat-marketplace-6zdjj\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.564090 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-utilities\") pod \"redhat-marketplace-6zdjj\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.564158 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-catalog-content\") pod \"redhat-marketplace-6zdjj\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.564796 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-utilities\") pod \"redhat-marketplace-6zdjj\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.564817 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-catalog-content\") pod \"redhat-marketplace-6zdjj\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.584126 4966 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2nxm7\" (UniqueName: \"kubernetes.io/projected/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-kube-api-access-2nxm7\") pod \"redhat-marketplace-6zdjj\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:21 crc kubenswrapper[4966]: I0127 15:57:21.675283 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:22 crc kubenswrapper[4966]: I0127 15:57:22.173519 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zdjj"] Jan 27 15:57:22 crc kubenswrapper[4966]: I0127 15:57:22.379313 4966 generic.go:334] "Generic (PLEG): container finished" podID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerID="27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2" exitCode=0 Jan 27 15:57:22 crc kubenswrapper[4966]: I0127 15:57:22.379417 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zdjj" event={"ID":"8d6b2e69-9208-4988-aeb6-5d4194bbf13f","Type":"ContainerDied","Data":"27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2"} Jan 27 15:57:22 crc kubenswrapper[4966]: I0127 15:57:22.379602 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zdjj" event={"ID":"8d6b2e69-9208-4988-aeb6-5d4194bbf13f","Type":"ContainerStarted","Data":"988c432396f64a8b9cd31bab000bc0f9b879bcd169b2db715992c7a7788e354c"} Jan 27 15:57:23 crc kubenswrapper[4966]: I0127 15:57:23.388504 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zdjj" event={"ID":"8d6b2e69-9208-4988-aeb6-5d4194bbf13f","Type":"ContainerStarted","Data":"dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8"} Jan 27 15:57:24 crc kubenswrapper[4966]: I0127 15:57:24.400082 4966 generic.go:334] "Generic (PLEG): container finished" podID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerID="dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8" exitCode=0 Jan 27 15:57:24 crc kubenswrapper[4966]: I0127 15:57:24.400154 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zdjj" event={"ID":"8d6b2e69-9208-4988-aeb6-5d4194bbf13f","Type":"ContainerDied","Data":"dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8"} Jan 27 15:57:25 crc kubenswrapper[4966]: I0127 15:57:25.409730 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zdjj" event={"ID":"8d6b2e69-9208-4988-aeb6-5d4194bbf13f","Type":"ContainerStarted","Data":"570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44"} Jan 27 15:57:25 crc kubenswrapper[4966]: I0127 15:57:25.432885 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6zdjj" podStartSLOduration=1.709804143 podStartE2EDuration="4.432863771s" podCreationTimestamp="2026-01-27 15:57:21 +0000 UTC" firstStartedPulling="2026-01-27 15:57:22.380887625 +0000 UTC m=+908.683681113" lastFinishedPulling="2026-01-27 15:57:25.103947253 +0000 UTC m=+911.406740741" observedRunningTime="2026-01-27 15:57:25.430359113 +0000 UTC m=+911.733152621" watchObservedRunningTime="2026-01-27 15:57:25.432863771 +0000 UTC m=+911.735657269" Jan 27 15:57:31 crc kubenswrapper[4966]: I0127 15:57:31.675453 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:31 crc kubenswrapper[4966]: I0127 15:57:31.676183 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:31 crc kubenswrapper[4966]: I0127 15:57:31.722168 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:32 crc kubenswrapper[4966]: I0127 15:57:32.528801 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:32 crc kubenswrapper[4966]: I0127 15:57:32.954473 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zdjj"] Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.492732 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6zdjj" podUID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerName="registry-server" containerID="cri-o://570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44" gracePeriod=2 Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.906791 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-9dq7q"] Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.908622 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9dq7q" Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.911226 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.911304 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-vg8pm" Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.911642 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.911743 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.911820 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.923308 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.923330 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-9dq7q"] Jan 27 15:57:34 crc kubenswrapper[4966]: I0127 15:57:34.931071 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.015483 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-trusted-ca\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.015835 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config-openshift-service-cacrt\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.015867 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-syslog-receiver\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.015924 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksl9g\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-kube-api-access-ksl9g\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.015981 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-entrypoint\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.016019 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-datadir\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.016059 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-tmp\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.016108 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-sa-token\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.016183 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " 
pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.016272 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-token\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.016322 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-metrics\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.074291 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-9dq7q"] Jan 27 15:57:35 crc kubenswrapper[4966]: E0127 15:57:35.074968 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-ksl9g metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-9dq7q" podUID="6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117013 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-catalog-content\") pod \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117118 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nxm7\" (UniqueName: \"kubernetes.io/projected/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-kube-api-access-2nxm7\") pod \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117175 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-utilities\") pod \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\" (UID: \"8d6b2e69-9208-4988-aeb6-5d4194bbf13f\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117456 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-token\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117506 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-metrics\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117539 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-trusted-ca\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " 
pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117578 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config-openshift-service-cacrt\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117606 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-syslog-receiver\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117635 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksl9g\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-kube-api-access-ksl9g\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117670 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-datadir\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117687 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-entrypoint\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117706 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-tmp\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117735 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-sa-token\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.117761 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.118640 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.118882 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: 
\"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-entrypoint\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.119219 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-datadir\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: E0127 15:57:35.119730 4966 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Jan 27 15:57:35 crc kubenswrapper[4966]: E0127 15:57:35.119767 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-metrics podName:6e4da306-1d2e-4d16-bba5-192f5d0b8e9d nodeName:}" failed. No retries permitted until 2026-01-27 15:57:35.619755964 +0000 UTC m=+921.922549452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-metrics") pod "collector-9dq7q" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d") : secret "collector-metrics" not found Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.120458 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-utilities" (OuterVolumeSpecName: "utilities") pod "8d6b2e69-9208-4988-aeb6-5d4194bbf13f" (UID: "8d6b2e69-9208-4988-aeb6-5d4194bbf13f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.124964 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-syslog-receiver\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.124993 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-token\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.125307 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-tmp\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.126764 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-trusted-ca\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.127381 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-kube-api-access-2nxm7" (OuterVolumeSpecName: "kube-api-access-2nxm7") pod "8d6b2e69-9208-4988-aeb6-5d4194bbf13f" (UID: 
"8d6b2e69-9208-4988-aeb6-5d4194bbf13f"). InnerVolumeSpecName "kube-api-access-2nxm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.128200 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config-openshift-service-cacrt\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.137377 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksl9g\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-kube-api-access-ksl9g\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.144460 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-sa-token\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.153922 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d6b2e69-9208-4988-aeb6-5d4194bbf13f" (UID: "8d6b2e69-9208-4988-aeb6-5d4194bbf13f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.219557 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.219590 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nxm7\" (UniqueName: \"kubernetes.io/projected/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-kube-api-access-2nxm7\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.219601 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6b2e69-9208-4988-aeb6-5d4194bbf13f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.501458 4966 generic.go:334] "Generic (PLEG): container finished" podID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerID="570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44" exitCode=0 Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.501498 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zdjj" event={"ID":"8d6b2e69-9208-4988-aeb6-5d4194bbf13f","Type":"ContainerDied","Data":"570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44"} Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.502622 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zdjj" event={"ID":"8d6b2e69-9208-4988-aeb6-5d4194bbf13f","Type":"ContainerDied","Data":"988c432396f64a8b9cd31bab000bc0f9b879bcd169b2db715992c7a7788e354c"} Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.501533 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zdjj" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.502675 4966 scope.go:117] "RemoveContainer" containerID="570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.502817 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.514502 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.535466 4966 scope.go:117] "RemoveContainer" containerID="dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.543329 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zdjj"] Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.549539 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zdjj"] Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.559793 4966 scope.go:117] "RemoveContainer" containerID="27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.575909 4966 scope.go:117] "RemoveContainer" containerID="570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44" Jan 27 15:57:35 crc kubenswrapper[4966]: E0127 15:57:35.576298 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44\": container with ID starting with 570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44 not found: ID does not exist" containerID="570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.576335 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44"} err="failed to get container status \"570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44\": rpc error: code = NotFound desc = could not find container \"570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44\": container with ID starting with 570aa382d64c329a95e7f79ab72f878dd552cb68e8d397bf889f82c468809b44 not found: ID does not exist" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.576358 4966 scope.go:117] "RemoveContainer" containerID="dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8" Jan 27 15:57:35 crc kubenswrapper[4966]: E0127 15:57:35.576634 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8\": container with ID starting with dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8 not found: ID does not exist" containerID="dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.576660 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8"} err="failed to get container status \"dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8\": rpc error: code = 
NotFound desc = could not find container \"dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8\": container with ID starting with dcd3443ceb56546a0d679f30ecbf5fcc82fa5c745b2d0fdb7315cbc66c8472c8 not found: ID does not exist" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.576676 4966 scope.go:117] "RemoveContainer" containerID="27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2" Jan 27 15:57:35 crc kubenswrapper[4966]: E0127 15:57:35.577007 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2\": container with ID starting with 27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2 not found: ID does not exist" containerID="27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.577058 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2"} err="failed to get container status \"27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2\": rpc error: code = NotFound desc = could not find container \"27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2\": container with ID starting with 27fc983fe6dbf453ac54adbb8b239a7472d9ae868bb84690401e6e4f64d962e2 not found: ID does not exist" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.628884 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config-openshift-service-cacrt\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.628994 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-trusted-ca\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629081 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629189 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-tmp\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629251 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-token\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629296 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-entrypoint\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: 
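
Each RemoveContainer above triggers a ContainerStatus lookup against CRI-O that fails with NotFound, and the kubelet logs the error but moves on: the container is already gone, so the removal is effectively complete. Treating NotFound as success is the standard way to keep this kind of cleanup idempotent. A minimal sketch of the pattern against a hypothetical runtime client (the status/codes helpers are real gRPC APIs; the RuntimeClient interface is assumed for illustration):

    package runtime

    import (
    	"context"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // RuntimeClient is a stand-in for the CRI runtime service; only the
    // single call used in this sketch is included.
    type RuntimeClient interface {
    	RemoveContainer(ctx context.Context, id string) error
    }

    // removeIfPresent deletes a container, treating NotFound as success so
    // that repeated cleanup passes (as in the log above) stay idempotent.
    func removeIfPresent(ctx context.Context, rt RuntimeClient, id string) error {
    	err := rt.RemoveContainer(ctx, id)
    	if status.Code(err) == codes.NotFound {
    		// Already gone: the desired state (no container) is satisfied.
    		return nil
    	}
    	return err
    }
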
\"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629325 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-syslog-receiver\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629351 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksl9g\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-kube-api-access-ksl9g\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629391 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-sa-token\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629416 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-datadir\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629578 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629712 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-metrics\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629865 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.629892 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.630419 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-datadir" (OuterVolumeSpecName: "datadir") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.630774 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.630834 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config" (OuterVolumeSpecName: "config") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.632929 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-sa-token" (OuterVolumeSpecName: "sa-token") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.633116 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-tmp" (OuterVolumeSpecName: "tmp") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.633239 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-metrics\") pod \"collector-9dq7q\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " pod="openshift-logging/collector-9dq7q" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.633385 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.633644 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-kube-api-access-ksl9g" (OuterVolumeSpecName: "kube-api-access-ksl9g") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "kube-api-access-ksl9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.635414 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-token" (OuterVolumeSpecName: "collector-token") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.730520 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-metrics\") pod \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\" (UID: \"6e4da306-1d2e-4d16-bba5-192f5d0b8e9d\") " Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.731017 4966 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.731040 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.731050 4966 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-tmp\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.731059 4966 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-token\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.731068 4966 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.731077 4966 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.731086 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksl9g\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-kube-api-access-ksl9g\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.731093 4966 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.731103 4966 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-datadir\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.733069 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-metrics" (OuterVolumeSpecName: "metrics") pod "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" (UID: "6e4da306-1d2e-4d16-bba5-192f5d0b8e9d"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:57:35 crc kubenswrapper[4966]: I0127 15:57:35.832952 4966 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.511410 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9dq7q" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.538299 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" path="/var/lib/kubelet/pods/8d6b2e69-9208-4988-aeb6-5d4194bbf13f/volumes" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.584930 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-9dq7q"] Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.588509 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-9dq7q"] Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.599759 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-8vsc7"] Jan 27 15:57:36 crc kubenswrapper[4966]: E0127 15:57:36.600091 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerName="registry-server" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.600111 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerName="registry-server" Jan 27 15:57:36 crc kubenswrapper[4966]: E0127 15:57:36.600124 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerName="extract-content" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.600134 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerName="extract-content" Jan 27 15:57:36 crc kubenswrapper[4966]: E0127 15:57:36.600160 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerName="extract-utilities" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.600167 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerName="extract-utilities" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.600314 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6b2e69-9208-4988-aeb6-5d4194bbf13f" containerName="registry-server" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.601118 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.603394 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.603456 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-vg8pm" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.605201 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.606384 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.606410 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.608911 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.613776 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-8vsc7"] Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748712 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-datadir\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748755 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-entrypoint\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748777 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-config-openshift-service-cacrt\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748810 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-config\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748845 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-collector-syslog-receiver\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748866 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-metrics\") pod \"collector-8vsc7\" (UID: 
\"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748908 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-trusted-ca\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748936 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-tmp\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748957 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-sa-token\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.748993 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljqnv\" (UniqueName: \"kubernetes.io/projected/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-kube-api-access-ljqnv\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.749010 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-collector-token\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850213 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljqnv\" (UniqueName: \"kubernetes.io/projected/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-kube-api-access-ljqnv\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850259 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-collector-token\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850282 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-datadir\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850299 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-entrypoint\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850316 4966 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-config-openshift-service-cacrt\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850345 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-config\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850383 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-collector-syslog-receiver\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850407 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-metrics\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850441 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-trusted-ca\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850470 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-tmp\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.850489 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-sa-token\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.851871 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-datadir\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.852530 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-config\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.852618 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-entrypoint\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " 
pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.853487 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-config-openshift-service-cacrt\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.853916 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-trusted-ca\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.855342 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-collector-token\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.857014 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-collector-syslog-receiver\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.863603 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-metrics\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.868141 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-tmp\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.869071 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljqnv\" (UniqueName: \"kubernetes.io/projected/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-kube-api-access-ljqnv\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.869387 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3b8269e8-5b68-4f5b-8bcf-9b4852846b6a-sa-token\") pod \"collector-8vsc7\" (UID: \"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a\") " pod="openshift-logging/collector-8vsc7" Jan 27 15:57:36 crc kubenswrapper[4966]: I0127 15:57:36.928087 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-8vsc7" Jan 27 15:57:37 crc kubenswrapper[4966]: I0127 15:57:37.391036 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-8vsc7"] Jan 27 15:57:37 crc kubenswrapper[4966]: I0127 15:57:37.522377 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-8vsc7" event={"ID":"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a","Type":"ContainerStarted","Data":"637a710ccead78585b8ef7c661aa27a988d1181499bf797944c2062d00c6188c"} Jan 27 15:57:38 crc kubenswrapper[4966]: I0127 15:57:38.538468 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e4da306-1d2e-4d16-bba5-192f5d0b8e9d" path="/var/lib/kubelet/pods/6e4da306-1d2e-4d16-bba5-192f5d0b8e9d/volumes" Jan 27 15:57:40 crc kubenswrapper[4966]: I0127 15:57:40.119259 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:57:40 crc kubenswrapper[4966]: I0127 15:57:40.119554 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:57:43 crc kubenswrapper[4966]: I0127 15:57:43.580009 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-8vsc7" event={"ID":"3b8269e8-5b68-4f5b-8bcf-9b4852846b6a","Type":"ContainerStarted","Data":"8786c16f73876aa232d55362951168faec2e80501f8c197391eef6173e980d1d"} Jan 27 15:57:43 crc kubenswrapper[4966]: I0127 15:57:43.658013 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-8vsc7" podStartSLOduration=2.166478476 podStartE2EDuration="7.657991486s" podCreationTimestamp="2026-01-27 15:57:36 +0000 UTC" firstStartedPulling="2026-01-27 15:57:37.407234444 +0000 UTC m=+923.710027982" lastFinishedPulling="2026-01-27 15:57:42.898747504 +0000 UTC m=+929.201540992" observedRunningTime="2026-01-27 15:57:43.65270701 +0000 UTC m=+929.955500518" watchObservedRunningTime="2026-01-27 15:57:43.657991486 +0000 UTC m=+929.960784974" Jan 27 15:58:10 crc kubenswrapper[4966]: I0127 15:58:10.119567 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:58:10 crc kubenswrapper[4966]: I0127 15:58:10.120196 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:58:10 crc kubenswrapper[4966]: I0127 15:58:10.120245 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 15:58:10 crc kubenswrapper[4966]: I0127 15:58:10.120808 4966 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d5bc75034aa8f67957594a0b69bd77e9edbe97abc49cd6f918ee618fb39479c"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:58:10 crc kubenswrapper[4966]: I0127 15:58:10.120852 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://3d5bc75034aa8f67957594a0b69bd77e9edbe97abc49cd6f918ee618fb39479c" gracePeriod=600 Jan 27 15:58:10 crc kubenswrapper[4966]: I0127 15:58:10.862120 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="3d5bc75034aa8f67957594a0b69bd77e9edbe97abc49cd6f918ee618fb39479c" exitCode=0 Jan 27 15:58:10 crc kubenswrapper[4966]: I0127 15:58:10.862358 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"3d5bc75034aa8f67957594a0b69bd77e9edbe97abc49cd6f918ee618fb39479c"} Jan 27 15:58:10 crc kubenswrapper[4966]: I0127 15:58:10.862800 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"ea82bc681b618d3fe42f05ec74306309a62aca633f08e8c15e2eb2ef6d9d0842"} Jan 27 15:58:10 crc kubenswrapper[4966]: I0127 15:58:10.862826 4966 scope.go:117] "RemoveContainer" containerID="a029a7ea4e898768651cb0cd5395b77c8220f6bb9b0dc0c1269341b2b5716b1b" Jan 27 15:58:16 crc kubenswrapper[4966]: I0127 15:58:16.979591 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p"] Jan 27 15:58:16 crc kubenswrapper[4966]: I0127 15:58:16.982375 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:16 crc kubenswrapper[4966]: I0127 15:58:16.986636 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 15:58:16 crc kubenswrapper[4966]: I0127 15:58:16.987062 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p"] Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.040940 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.041161 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f7ps\" (UniqueName: \"kubernetes.io/projected/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-kube-api-access-6f7ps\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.041319 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.142298 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.142716 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.142784 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f7ps\" (UniqueName: \"kubernetes.io/projected/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-kube-api-access-6f7ps\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.142941 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.143178 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.159334 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f7ps\" (UniqueName: \"kubernetes.io/projected/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-kube-api-access-6f7ps\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.306594 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.547260 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p"] Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.928085 4966 generic.go:334] "Generic (PLEG): container finished" podID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerID="b89deb570159ac908980ca89194eed27a71b75e0d09a2b69c96af84268d2d517" exitCode=0 Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.928160 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" event={"ID":"8314cd46-03d3-46e9-a7dc-d7a561fe50fb","Type":"ContainerDied","Data":"b89deb570159ac908980ca89194eed27a71b75e0d09a2b69c96af84268d2d517"} Jan 27 15:58:17 crc kubenswrapper[4966]: I0127 15:58:17.928227 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" event={"ID":"8314cd46-03d3-46e9-a7dc-d7a561fe50fb","Type":"ContainerStarted","Data":"271f7dd43af31da25e4bfaf67cb3a0a86db27ca8a8cbe096f05c84bb2ce3b01e"} Jan 27 15:58:19 crc kubenswrapper[4966]: I0127 15:58:19.941687 4966 generic.go:334] "Generic (PLEG): container finished" podID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerID="f083463cc0cebc42994914bafa9347d3810f8925b90d60f7b119bda13ab9a2f2" exitCode=0 Jan 27 15:58:19 crc kubenswrapper[4966]: I0127 15:58:19.941775 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" event={"ID":"8314cd46-03d3-46e9-a7dc-d7a561fe50fb","Type":"ContainerDied","Data":"f083463cc0cebc42994914bafa9347d3810f8925b90d60f7b119bda13ab9a2f2"} Jan 27 15:58:20 crc kubenswrapper[4966]: I0127 15:58:20.954712 4966 generic.go:334] "Generic (PLEG): container finished" podID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerID="6e1abb30a696ecc9bad6676c259fef0122465233240ad5238de119968df44aa6" exitCode=0 Jan 27 15:58:20 crc kubenswrapper[4966]: I0127 
15:58:20.954765 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" event={"ID":"8314cd46-03d3-46e9-a7dc-d7a561fe50fb","Type":"ContainerDied","Data":"6e1abb30a696ecc9bad6676c259fef0122465233240ad5238de119968df44aa6"} Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.287151 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.422672 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f7ps\" (UniqueName: \"kubernetes.io/projected/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-kube-api-access-6f7ps\") pod \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.422807 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-bundle\") pod \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.422861 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-util\") pod \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\" (UID: \"8314cd46-03d3-46e9-a7dc-d7a561fe50fb\") " Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.425076 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-bundle" (OuterVolumeSpecName: "bundle") pod "8314cd46-03d3-46e9-a7dc-d7a561fe50fb" (UID: "8314cd46-03d3-46e9-a7dc-d7a561fe50fb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.430084 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-kube-api-access-6f7ps" (OuterVolumeSpecName: "kube-api-access-6f7ps") pod "8314cd46-03d3-46e9-a7dc-d7a561fe50fb" (UID: "8314cd46-03d3-46e9-a7dc-d7a561fe50fb"). InnerVolumeSpecName "kube-api-access-6f7ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.443323 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-util" (OuterVolumeSpecName: "util") pod "8314cd46-03d3-46e9-a7dc-d7a561fe50fb" (UID: "8314cd46-03d3-46e9-a7dc-d7a561fe50fb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.526158 4966 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.526209 4966 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-util\") on node \"crc\" DevicePath \"\"" Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.526231 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f7ps\" (UniqueName: \"kubernetes.io/projected/8314cd46-03d3-46e9-a7dc-d7a561fe50fb-kube-api-access-6f7ps\") on node \"crc\" DevicePath \"\"" Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.978290 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" event={"ID":"8314cd46-03d3-46e9-a7dc-d7a561fe50fb","Type":"ContainerDied","Data":"271f7dd43af31da25e4bfaf67cb3a0a86db27ca8a8cbe096f05c84bb2ce3b01e"} Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.978334 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="271f7dd43af31da25e4bfaf67cb3a0a86db27ca8a8cbe096f05c84bb2ce3b01e" Jan 27 15:58:22 crc kubenswrapper[4966]: I0127 15:58:22.978394 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.971464 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l2twl"] Jan 27 15:58:28 crc kubenswrapper[4966]: E0127 15:58:28.973024 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerName="extract" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.973115 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerName="extract" Jan 27 15:58:28 crc kubenswrapper[4966]: E0127 15:58:28.973199 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerName="util" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.973280 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerName="util" Jan 27 15:58:28 crc kubenswrapper[4966]: E0127 15:58:28.973358 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerName="pull" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.973457 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerName="pull" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.973813 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="8314cd46-03d3-46e9-a7dc-d7a561fe50fb" containerName="extract" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.974558 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l2twl" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.976924 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.979483 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.979566 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-m6frl" Jan 27 15:58:28 crc kubenswrapper[4966]: I0127 15:58:28.987266 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l2twl"] Jan 27 15:58:29 crc kubenswrapper[4966]: I0127 15:58:29.129469 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27q9\" (UniqueName: \"kubernetes.io/projected/7761ce15-c3e7-45f2-a85e-000ea7043118-kube-api-access-j27q9\") pod \"nmstate-operator-646758c888-l2twl\" (UID: \"7761ce15-c3e7-45f2-a85e-000ea7043118\") " pod="openshift-nmstate/nmstate-operator-646758c888-l2twl" Jan 27 15:58:29 crc kubenswrapper[4966]: I0127 15:58:29.231131 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j27q9\" (UniqueName: \"kubernetes.io/projected/7761ce15-c3e7-45f2-a85e-000ea7043118-kube-api-access-j27q9\") pod \"nmstate-operator-646758c888-l2twl\" (UID: \"7761ce15-c3e7-45f2-a85e-000ea7043118\") " pod="openshift-nmstate/nmstate-operator-646758c888-l2twl" Jan 27 15:58:29 crc kubenswrapper[4966]: I0127 15:58:29.251387 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j27q9\" (UniqueName: \"kubernetes.io/projected/7761ce15-c3e7-45f2-a85e-000ea7043118-kube-api-access-j27q9\") pod \"nmstate-operator-646758c888-l2twl\" (UID: \"7761ce15-c3e7-45f2-a85e-000ea7043118\") " pod="openshift-nmstate/nmstate-operator-646758c888-l2twl" Jan 27 15:58:29 crc kubenswrapper[4966]: I0127 15:58:29.297486 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l2twl" Jan 27 15:58:29 crc kubenswrapper[4966]: I0127 15:58:29.750095 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l2twl"] Jan 27 15:58:30 crc kubenswrapper[4966]: I0127 15:58:30.033601 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l2twl" event={"ID":"7761ce15-c3e7-45f2-a85e-000ea7043118","Type":"ContainerStarted","Data":"b747aebde5b07256456f3b9f7bf3c90bca0db2e88457be7cfe0a4a8e2f0842ec"} Jan 27 15:58:33 crc kubenswrapper[4966]: I0127 15:58:33.059016 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l2twl" event={"ID":"7761ce15-c3e7-45f2-a85e-000ea7043118","Type":"ContainerStarted","Data":"03bfb29c92d6c44dd74a1e813f7fa616ab2a9156ee95314082fe1b6f83fa2b66"} Jan 27 15:58:33 crc kubenswrapper[4966]: I0127 15:58:33.094032 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-l2twl" podStartSLOduration=1.9783238060000001 podStartE2EDuration="5.094006447s" podCreationTimestamp="2026-01-27 15:58:28 +0000 UTC" firstStartedPulling="2026-01-27 15:58:29.756296676 +0000 UTC m=+976.059090164" lastFinishedPulling="2026-01-27 15:58:32.871979317 +0000 UTC m=+979.174772805" observedRunningTime="2026-01-27 15:58:33.082463395 +0000 UTC m=+979.385256883" watchObservedRunningTime="2026-01-27 15:58:33.094006447 +0000 UTC m=+979.396799965" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.224356 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-8lp6r"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.227101 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-8lp6r" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.229398 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-wrx8b" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.232833 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.233764 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.236160 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.253343 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.278191 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-8lp6r"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.296716 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-86gp2"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.297585 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.378664 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ee84a560-7150-49bd-94ac-e190aab8bc92-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5bg28\" (UID: \"ee84a560-7150-49bd-94ac-e190aab8bc92\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.378763 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54zds\" (UniqueName: \"kubernetes.io/projected/24848ca3-bec9-4747-9d22-58606da5ef34-kube-api-access-54zds\") pod \"nmstate-metrics-54757c584b-8lp6r\" (UID: \"24848ca3-bec9-4747-9d22-58606da5ef34\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-8lp6r" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.378835 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjwj4\" (UniqueName: \"kubernetes.io/projected/ee84a560-7150-49bd-94ac-e190aab8bc92-kube-api-access-kjwj4\") pod \"nmstate-webhook-8474b5b9d8-5bg28\" (UID: \"ee84a560-7150-49bd-94ac-e190aab8bc92\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.479884 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prvfj\" (UniqueName: \"kubernetes.io/projected/25afd019-0360-4ea5-ac94-94c6f42bb8a8-kube-api-access-prvfj\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.479986 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ee84a560-7150-49bd-94ac-e190aab8bc92-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5bg28\" (UID: \"ee84a560-7150-49bd-94ac-e190aab8bc92\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.480025 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/25afd019-0360-4ea5-ac94-94c6f42bb8a8-dbus-socket\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.480047 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54zds\" (UniqueName: \"kubernetes.io/projected/24848ca3-bec9-4747-9d22-58606da5ef34-kube-api-access-54zds\") pod \"nmstate-metrics-54757c584b-8lp6r\" (UID: \"24848ca3-bec9-4747-9d22-58606da5ef34\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-8lp6r" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.480069 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/25afd019-0360-4ea5-ac94-94c6f42bb8a8-ovs-socket\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.480094 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/25afd019-0360-4ea5-ac94-94c6f42bb8a8-nmstate-lock\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.480111 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjwj4\" (UniqueName: \"kubernetes.io/projected/ee84a560-7150-49bd-94ac-e190aab8bc92-kube-api-access-kjwj4\") pod \"nmstate-webhook-8474b5b9d8-5bg28\" (UID: \"ee84a560-7150-49bd-94ac-e190aab8bc92\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:38 crc kubenswrapper[4966]: E0127 15:58:38.480583 4966 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 15:58:38 crc kubenswrapper[4966]: E0127 15:58:38.480741 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84a560-7150-49bd-94ac-e190aab8bc92-tls-key-pair podName:ee84a560-7150-49bd-94ac-e190aab8bc92 nodeName:}" failed. No retries permitted until 2026-01-27 15:58:38.980719147 +0000 UTC m=+985.283512625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ee84a560-7150-49bd-94ac-e190aab8bc92-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-5bg28" (UID: "ee84a560-7150-49bd-94ac-e190aab8bc92") : secret "openshift-nmstate-webhook" not found Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.509317 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjwj4\" (UniqueName: \"kubernetes.io/projected/ee84a560-7150-49bd-94ac-e190aab8bc92-kube-api-access-kjwj4\") pod \"nmstate-webhook-8474b5b9d8-5bg28\" (UID: \"ee84a560-7150-49bd-94ac-e190aab8bc92\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.513598 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54zds\" (UniqueName: \"kubernetes.io/projected/24848ca3-bec9-4747-9d22-58606da5ef34-kube-api-access-54zds\") pod \"nmstate-metrics-54757c584b-8lp6r\" (UID: \"24848ca3-bec9-4747-9d22-58606da5ef34\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-8lp6r" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.517485 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.522025 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.523521 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-t69sd" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.523839 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.524572 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.533168 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.557537 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-8lp6r" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.581965 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/25afd019-0360-4ea5-ac94-94c6f42bb8a8-dbus-socket\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.582018 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/25afd019-0360-4ea5-ac94-94c6f42bb8a8-ovs-socket\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.582051 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/25afd019-0360-4ea5-ac94-94c6f42bb8a8-nmstate-lock\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.582112 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prvfj\" (UniqueName: \"kubernetes.io/projected/25afd019-0360-4ea5-ac94-94c6f42bb8a8-kube-api-access-prvfj\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.582361 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/25afd019-0360-4ea5-ac94-94c6f42bb8a8-ovs-socket\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.582646 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/25afd019-0360-4ea5-ac94-94c6f42bb8a8-dbus-socket\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.582680 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/25afd019-0360-4ea5-ac94-94c6f42bb8a8-nmstate-lock\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.609979 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prvfj\" (UniqueName: \"kubernetes.io/projected/25afd019-0360-4ea5-ac94-94c6f42bb8a8-kube-api-access-prvfj\") pod \"nmstate-handler-86gp2\" (UID: \"25afd019-0360-4ea5-ac94-94c6f42bb8a8\") " pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.629643 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.685298 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.685612 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.685696 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqcs5\" (UniqueName: \"kubernetes.io/projected/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-kube-api-access-fqcs5\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.738869 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-774cdb758b-bmhpk"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.740491 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.771300 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-774cdb758b-bmhpk"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.786884 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.786984 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.787045 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqcs5\" (UniqueName: \"kubernetes.io/projected/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-kube-api-access-fqcs5\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:38 crc kubenswrapper[4966]: E0127 15:58:38.787430 4966 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 15:58:38 crc kubenswrapper[4966]: E0127 15:58:38.787660 4966 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-plugin-serving-cert podName:a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e nodeName:}" failed. No retries permitted until 2026-01-27 15:58:39.287460025 +0000 UTC m=+985.590253513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-mtks6" (UID: "a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e") : secret "plugin-serving-cert" not found Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.788958 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.806369 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqcs5\" (UniqueName: \"kubernetes.io/projected/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-kube-api-access-fqcs5\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.887932 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-oauth-config\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.887999 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-service-ca\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.888023 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-serving-cert\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.888041 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-trusted-ca-bundle\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.888074 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5m4j\" (UniqueName: \"kubernetes.io/projected/efa660ea-77f1-49d3-809c-d8f73519dd08-kube-api-access-n5m4j\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.888393 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-oauth-serving-cert\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.888675 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-console-config\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: W0127 15:58:38.894113 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24848ca3_bec9_4747_9d22_58606da5ef34.slice/crio-d8bc7ab5c92192c7089c95836d872cdd43f693d9dd3422d49d2f23fb89e155c4 WatchSource:0}: Error finding container d8bc7ab5c92192c7089c95836d872cdd43f693d9dd3422d49d2f23fb89e155c4: Status 404 returned error can't find the container with id d8bc7ab5c92192c7089c95836d872cdd43f693d9dd3422d49d2f23fb89e155c4 Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.894338 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-8lp6r"] Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.989775 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-oauth-config\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.989854 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-service-ca\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.989876 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-serving-cert\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.989890 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-trusted-ca-bundle\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.989937 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5m4j\" (UniqueName: \"kubernetes.io/projected/efa660ea-77f1-49d3-809c-d8f73519dd08-kube-api-access-n5m4j\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.989985 4966 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-oauth-serving-cert\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.990013 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ee84a560-7150-49bd-94ac-e190aab8bc92-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5bg28\" (UID: \"ee84a560-7150-49bd-94ac-e190aab8bc92\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.990077 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-console-config\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.990837 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-service-ca\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.990928 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-console-config\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.991667 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-oauth-serving-cert\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.991863 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-trusted-ca-bundle\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.993773 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-oauth-config\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.994454 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ee84a560-7150-49bd-94ac-e190aab8bc92-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5bg28\" (UID: \"ee84a560-7150-49bd-94ac-e190aab8bc92\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:38 crc kubenswrapper[4966]: I0127 15:58:38.994455 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-serving-cert\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.006489 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5m4j\" (UniqueName: \"kubernetes.io/projected/efa660ea-77f1-49d3-809c-d8f73519dd08-kube-api-access-n5m4j\") pod \"console-774cdb758b-bmhpk\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.094445 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.099748 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-86gp2" event={"ID":"25afd019-0360-4ea5-ac94-94c6f42bb8a8","Type":"ContainerStarted","Data":"5d8c200c02b99db1fa7af7963e14453839b2bae1c70d1d6c9d71e4a27fc1628d"} Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.101549 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-8lp6r" event={"ID":"24848ca3-bec9-4747-9d22-58606da5ef34","Type":"ContainerStarted","Data":"d8bc7ab5c92192c7089c95836d872cdd43f693d9dd3422d49d2f23fb89e155c4"} Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.170914 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.295736 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.300455 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mtks6\" (UID: \"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.412323 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28"] Jan 27 15:58:39 crc kubenswrapper[4966]: W0127 15:58:39.413019 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee84a560_7150_49bd_94ac_e190aab8bc92.slice/crio-36c1e00c8aca414807913bdb2489df8a1fdfcdfb16f7f06c86acbc9c70651598 WatchSource:0}: Error finding container 36c1e00c8aca414807913bdb2489df8a1fdfcdfb16f7f06c86acbc9c70651598: Status 404 returned error can't find the container with id 36c1e00c8aca414807913bdb2489df8a1fdfcdfb16f7f06c86acbc9c70651598 Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.544054 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.578197 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-774cdb758b-bmhpk"] Jan 27 15:58:39 crc kubenswrapper[4966]: W0127 15:58:39.598479 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefa660ea_77f1_49d3_809c_d8f73519dd08.slice/crio-c63a4ff5788f54c3f95695abfad42cae02634213563c694837748449d09beade WatchSource:0}: Error finding container c63a4ff5788f54c3f95695abfad42cae02634213563c694837748449d09beade: Status 404 returned error can't find the container with id c63a4ff5788f54c3f95695abfad42cae02634213563c694837748449d09beade Jan 27 15:58:39 crc kubenswrapper[4966]: I0127 15:58:39.957884 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6"] Jan 27 15:58:40 crc kubenswrapper[4966]: I0127 15:58:40.110265 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" event={"ID":"ee84a560-7150-49bd-94ac-e190aab8bc92","Type":"ContainerStarted","Data":"36c1e00c8aca414807913bdb2489df8a1fdfcdfb16f7f06c86acbc9c70651598"} Jan 27 15:58:40 crc kubenswrapper[4966]: I0127 15:58:40.111643 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" event={"ID":"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e","Type":"ContainerStarted","Data":"d23d000f3723053b93790cfde4dd41e65d7e8ae4a5de7c216931cfadd03d341e"} Jan 27 15:58:40 crc kubenswrapper[4966]: I0127 15:58:40.113257 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-774cdb758b-bmhpk" event={"ID":"efa660ea-77f1-49d3-809c-d8f73519dd08","Type":"ContainerStarted","Data":"c47139525faa9af3edeb86b680fa64ead596a43b74df629b52759e228e74153d"} Jan 27 15:58:40 crc kubenswrapper[4966]: I0127 15:58:40.113295 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-774cdb758b-bmhpk" event={"ID":"efa660ea-77f1-49d3-809c-d8f73519dd08","Type":"ContainerStarted","Data":"c63a4ff5788f54c3f95695abfad42cae02634213563c694837748449d09beade"} Jan 27 15:58:40 crc kubenswrapper[4966]: I0127 15:58:40.137146 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-774cdb758b-bmhpk" podStartSLOduration=2.137125031 podStartE2EDuration="2.137125031s" podCreationTimestamp="2026-01-27 15:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:58:40.136415119 +0000 UTC m=+986.439208637" watchObservedRunningTime="2026-01-27 15:58:40.137125031 +0000 UTC m=+986.439918529" Jan 27 15:58:42 crc kubenswrapper[4966]: I0127 15:58:42.131706 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-86gp2" event={"ID":"25afd019-0360-4ea5-ac94-94c6f42bb8a8","Type":"ContainerStarted","Data":"e80639ea432ed48dea5e488688bc9531a38b0c01f7835ec29857901f01abc663"} Jan 27 15:58:42 crc kubenswrapper[4966]: I0127 15:58:42.132539 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:42 crc kubenswrapper[4966]: I0127 15:58:42.133969 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" 
event={"ID":"ee84a560-7150-49bd-94ac-e190aab8bc92","Type":"ContainerStarted","Data":"e28bebb3029c8f7bdba7090c4036f98f3eb5760cf7c642fc76a58ba54f28f7ba"} Jan 27 15:58:42 crc kubenswrapper[4966]: I0127 15:58:42.134605 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:58:42 crc kubenswrapper[4966]: I0127 15:58:42.135875 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-8lp6r" event={"ID":"24848ca3-bec9-4747-9d22-58606da5ef34","Type":"ContainerStarted","Data":"14fea07e571ffc36a15c6a61403ad1d83aa7de0b2a03f91c406137875a064178"} Jan 27 15:58:42 crc kubenswrapper[4966]: I0127 15:58:42.157127 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-86gp2" podStartSLOduration=1.203744501 podStartE2EDuration="4.156950863s" podCreationTimestamp="2026-01-27 15:58:38 +0000 UTC" firstStartedPulling="2026-01-27 15:58:38.686106123 +0000 UTC m=+984.988899611" lastFinishedPulling="2026-01-27 15:58:41.639312485 +0000 UTC m=+987.942105973" observedRunningTime="2026-01-27 15:58:42.153353861 +0000 UTC m=+988.456147379" watchObservedRunningTime="2026-01-27 15:58:42.156950863 +0000 UTC m=+988.459744381" Jan 27 15:58:42 crc kubenswrapper[4966]: I0127 15:58:42.181643 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" podStartSLOduration=1.997272312 podStartE2EDuration="4.181618369s" podCreationTimestamp="2026-01-27 15:58:38 +0000 UTC" firstStartedPulling="2026-01-27 15:58:39.414719975 +0000 UTC m=+985.717513463" lastFinishedPulling="2026-01-27 15:58:41.599066032 +0000 UTC m=+987.901859520" observedRunningTime="2026-01-27 15:58:42.173193034 +0000 UTC m=+988.475986532" watchObservedRunningTime="2026-01-27 15:58:42.181618369 +0000 UTC m=+988.484411867" Jan 27 15:58:43 crc kubenswrapper[4966]: I0127 15:58:43.147825 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" event={"ID":"a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e","Type":"ContainerStarted","Data":"74f9d8f7a108cf8576e9591165f5fe46929fd079a196cafd07a1f7807ecf07a8"} Jan 27 15:58:44 crc kubenswrapper[4966]: I0127 15:58:44.191232 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mtks6" podStartSLOduration=3.693714703 podStartE2EDuration="6.19121576s" podCreationTimestamp="2026-01-27 15:58:38 +0000 UTC" firstStartedPulling="2026-01-27 15:58:39.963944595 +0000 UTC m=+986.266738083" lastFinishedPulling="2026-01-27 15:58:42.461445652 +0000 UTC m=+988.764239140" observedRunningTime="2026-01-27 15:58:44.189794225 +0000 UTC m=+990.492587723" watchObservedRunningTime="2026-01-27 15:58:44.19121576 +0000 UTC m=+990.494009248" Jan 27 15:58:45 crc kubenswrapper[4966]: I0127 15:58:45.179782 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-8lp6r" event={"ID":"24848ca3-bec9-4747-9d22-58606da5ef34","Type":"ContainerStarted","Data":"456850d733d9496aa7716f11d350e49b887637530b5636e2b31eff391d495fcd"} Jan 27 15:58:45 crc kubenswrapper[4966]: I0127 15:58:45.201157 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-8lp6r" podStartSLOduration=1.559409466 podStartE2EDuration="7.201141161s" podCreationTimestamp="2026-01-27 15:58:38 +0000 UTC" 
firstStartedPulling="2026-01-27 15:58:38.895383022 +0000 UTC m=+985.198176510" lastFinishedPulling="2026-01-27 15:58:44.537114707 +0000 UTC m=+990.839908205" observedRunningTime="2026-01-27 15:58:45.20109664 +0000 UTC m=+991.503890138" watchObservedRunningTime="2026-01-27 15:58:45.201141161 +0000 UTC m=+991.503934649" Jan 27 15:58:48 crc kubenswrapper[4966]: I0127 15:58:48.669231 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-86gp2" Jan 27 15:58:49 crc kubenswrapper[4966]: I0127 15:58:49.094920 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:49 crc kubenswrapper[4966]: I0127 15:58:49.094955 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:49 crc kubenswrapper[4966]: I0127 15:58:49.098827 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:49 crc kubenswrapper[4966]: I0127 15:58:49.213203 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 15:58:49 crc kubenswrapper[4966]: I0127 15:58:49.282008 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c557ffddd-h86q9"] Jan 27 15:58:59 crc kubenswrapper[4966]: I0127 15:58:59.178824 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" Jan 27 15:59:14 crc kubenswrapper[4966]: I0127 15:59:14.338391 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5c557ffddd-h86q9" podUID="3e677fe7-a980-4979-92bc-966eed6ddf11" containerName="console" containerID="cri-o://2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5" gracePeriod=15 Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.215619 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c557ffddd-h86q9_3e677fe7-a980-4979-92bc-966eed6ddf11/console/0.log" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.216260 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c557ffddd-h86q9" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.313494 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-trusted-ca-bundle\") pod \"3e677fe7-a980-4979-92bc-966eed6ddf11\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.313562 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-service-ca\") pod \"3e677fe7-a980-4979-92bc-966eed6ddf11\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.313601 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-oauth-serving-cert\") pod \"3e677fe7-a980-4979-92bc-966eed6ddf11\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.313629 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-serving-cert\") pod \"3e677fe7-a980-4979-92bc-966eed6ddf11\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.313683 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-oauth-config\") pod \"3e677fe7-a980-4979-92bc-966eed6ddf11\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.313711 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr4hk\" (UniqueName: \"kubernetes.io/projected/3e677fe7-a980-4979-92bc-966eed6ddf11-kube-api-access-cr4hk\") pod \"3e677fe7-a980-4979-92bc-966eed6ddf11\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.313757 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-console-config\") pod \"3e677fe7-a980-4979-92bc-966eed6ddf11\" (UID: \"3e677fe7-a980-4979-92bc-966eed6ddf11\") " Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.314888 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-console-config" (OuterVolumeSpecName: "console-config") pod "3e677fe7-a980-4979-92bc-966eed6ddf11" (UID: "3e677fe7-a980-4979-92bc-966eed6ddf11"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.315411 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3e677fe7-a980-4979-92bc-966eed6ddf11" (UID: "3e677fe7-a980-4979-92bc-966eed6ddf11"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.315831 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-service-ca" (OuterVolumeSpecName: "service-ca") pod "3e677fe7-a980-4979-92bc-966eed6ddf11" (UID: "3e677fe7-a980-4979-92bc-966eed6ddf11"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.316346 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3e677fe7-a980-4979-92bc-966eed6ddf11" (UID: "3e677fe7-a980-4979-92bc-966eed6ddf11"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.322718 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3e677fe7-a980-4979-92bc-966eed6ddf11" (UID: "3e677fe7-a980-4979-92bc-966eed6ddf11"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.324046 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3e677fe7-a980-4979-92bc-966eed6ddf11" (UID: "3e677fe7-a980-4979-92bc-966eed6ddf11"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.327501 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e677fe7-a980-4979-92bc-966eed6ddf11-kube-api-access-cr4hk" (OuterVolumeSpecName: "kube-api-access-cr4hk") pod "3e677fe7-a980-4979-92bc-966eed6ddf11" (UID: "3e677fe7-a980-4979-92bc-966eed6ddf11"). InnerVolumeSpecName "kube-api-access-cr4hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.415488 4966 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.415533 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr4hk\" (UniqueName: \"kubernetes.io/projected/3e677fe7-a980-4979-92bc-966eed6ddf11-kube-api-access-cr4hk\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.415549 4966 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.415562 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.415615 4966 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.415627 4966 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3e677fe7-a980-4979-92bc-966eed6ddf11-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.415641 4966 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e677fe7-a980-4979-92bc-966eed6ddf11-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.485713 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c557ffddd-h86q9_3e677fe7-a980-4979-92bc-966eed6ddf11/console/0.log" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.485787 4966 generic.go:334] "Generic (PLEG): container finished" podID="3e677fe7-a980-4979-92bc-966eed6ddf11" containerID="2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5" exitCode=2 Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.485823 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c557ffddd-h86q9" event={"ID":"3e677fe7-a980-4979-92bc-966eed6ddf11","Type":"ContainerDied","Data":"2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5"} Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.485861 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c557ffddd-h86q9" event={"ID":"3e677fe7-a980-4979-92bc-966eed6ddf11","Type":"ContainerDied","Data":"43fcb1f5ad85758fb5ab12f1ee2ad327bfb07879fbc6d81a592d2d8e7b181421"} Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.485861 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c557ffddd-h86q9" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.485883 4966 scope.go:117] "RemoveContainer" containerID="2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.521233 4966 scope.go:117] "RemoveContainer" containerID="2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5" Jan 27 15:59:15 crc kubenswrapper[4966]: E0127 15:59:15.521697 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5\": container with ID starting with 2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5 not found: ID does not exist" containerID="2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.521745 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5"} err="failed to get container status \"2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5\": rpc error: code = NotFound desc = could not find container \"2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5\": container with ID starting with 2186769a4a7fcf311a8cafb27ac230391483567b2f5c63f2636ed85b666c45b5 not found: ID does not exist" Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.533065 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c557ffddd-h86q9"] Jan 27 15:59:15 crc kubenswrapper[4966]: I0127 15:59:15.542065 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5c557ffddd-h86q9"] Jan 27 15:59:16 crc kubenswrapper[4966]: I0127 15:59:16.528719 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e677fe7-a980-4979-92bc-966eed6ddf11" path="/var/lib/kubelet/pods/3e677fe7-a980-4979-92bc-966eed6ddf11/volumes" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.121357 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd"] Jan 27 15:59:19 crc kubenswrapper[4966]: E0127 15:59:19.122252 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e677fe7-a980-4979-92bc-966eed6ddf11" containerName="console" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.122266 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e677fe7-a980-4979-92bc-966eed6ddf11" containerName="console" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.122422 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e677fe7-a980-4979-92bc-966eed6ddf11" containerName="console" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.123367 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.132626 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd"] Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.132818 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.184738 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh787\" (UniqueName: \"kubernetes.io/projected/071b256a-ffeb-405a-b9ac-d65c622633b9-kube-api-access-bh787\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.184807 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.184956 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.286140 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.286224 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.286329 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh787\" (UniqueName: \"kubernetes.io/projected/071b256a-ffeb-405a-b9ac-d65c622633b9-kube-api-access-bh787\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.286601 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.286709 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.321853 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh787\" (UniqueName: \"kubernetes.io/projected/071b256a-ffeb-405a-b9ac-d65c622633b9-kube-api-access-bh787\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.440092 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:19 crc kubenswrapper[4966]: I0127 15:59:19.888797 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd"] Jan 27 15:59:19 crc kubenswrapper[4966]: W0127 15:59:19.895467 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod071b256a_ffeb_405a_b9ac_d65c622633b9.slice/crio-fc580bb7184f43ca779f284246079d85829a752d62bebda41033b13199288c18 WatchSource:0}: Error finding container fc580bb7184f43ca779f284246079d85829a752d62bebda41033b13199288c18: Status 404 returned error can't find the container with id fc580bb7184f43ca779f284246079d85829a752d62bebda41033b13199288c18 Jan 27 15:59:20 crc kubenswrapper[4966]: I0127 15:59:20.525503 4966 generic.go:334] "Generic (PLEG): container finished" podID="071b256a-ffeb-405a-b9ac-d65c622633b9" containerID="34a0f6c673fa8f75ee97d6ab4f6a1f452a4c7049e0e5c88213ccf734186b3fc3" exitCode=0 Jan 27 15:59:20 crc kubenswrapper[4966]: I0127 15:59:20.527829 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:59:20 crc kubenswrapper[4966]: I0127 15:59:20.532613 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" event={"ID":"071b256a-ffeb-405a-b9ac-d65c622633b9","Type":"ContainerDied","Data":"34a0f6c673fa8f75ee97d6ab4f6a1f452a4c7049e0e5c88213ccf734186b3fc3"} Jan 27 15:59:20 crc kubenswrapper[4966]: I0127 15:59:20.532727 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" event={"ID":"071b256a-ffeb-405a-b9ac-d65c622633b9","Type":"ContainerStarted","Data":"fc580bb7184f43ca779f284246079d85829a752d62bebda41033b13199288c18"} Jan 27 15:59:23 crc kubenswrapper[4966]: I0127 15:59:23.557806 4966 generic.go:334] "Generic (PLEG): container finished" podID="071b256a-ffeb-405a-b9ac-d65c622633b9" 
containerID="5621b8e9b50b37cd6f2aec5ba908af6c1c53343a5a5b238c30682acb1556632e" exitCode=0 Jan 27 15:59:23 crc kubenswrapper[4966]: I0127 15:59:23.557976 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" event={"ID":"071b256a-ffeb-405a-b9ac-d65c622633b9","Type":"ContainerDied","Data":"5621b8e9b50b37cd6f2aec5ba908af6c1c53343a5a5b238c30682acb1556632e"} Jan 27 15:59:24 crc kubenswrapper[4966]: I0127 15:59:24.568532 4966 generic.go:334] "Generic (PLEG): container finished" podID="071b256a-ffeb-405a-b9ac-d65c622633b9" containerID="a2b1a04ddb204e596f639ca6ecaf1dd601c7226b353f714ddc5090b7752e8867" exitCode=0 Jan 27 15:59:24 crc kubenswrapper[4966]: I0127 15:59:24.568579 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" event={"ID":"071b256a-ffeb-405a-b9ac-d65c622633b9","Type":"ContainerDied","Data":"a2b1a04ddb204e596f639ca6ecaf1dd601c7226b353f714ddc5090b7752e8867"} Jan 27 15:59:25 crc kubenswrapper[4966]: I0127 15:59:25.966489 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.119601 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh787\" (UniqueName: \"kubernetes.io/projected/071b256a-ffeb-405a-b9ac-d65c622633b9-kube-api-access-bh787\") pod \"071b256a-ffeb-405a-b9ac-d65c622633b9\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.119681 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-util\") pod \"071b256a-ffeb-405a-b9ac-d65c622633b9\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.119714 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-bundle\") pod \"071b256a-ffeb-405a-b9ac-d65c622633b9\" (UID: \"071b256a-ffeb-405a-b9ac-d65c622633b9\") " Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.121163 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-bundle" (OuterVolumeSpecName: "bundle") pod "071b256a-ffeb-405a-b9ac-d65c622633b9" (UID: "071b256a-ffeb-405a-b9ac-d65c622633b9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.133743 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071b256a-ffeb-405a-b9ac-d65c622633b9-kube-api-access-bh787" (OuterVolumeSpecName: "kube-api-access-bh787") pod "071b256a-ffeb-405a-b9ac-d65c622633b9" (UID: "071b256a-ffeb-405a-b9ac-d65c622633b9"). InnerVolumeSpecName "kube-api-access-bh787". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.136451 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-util" (OuterVolumeSpecName: "util") pod "071b256a-ffeb-405a-b9ac-d65c622633b9" (UID: "071b256a-ffeb-405a-b9ac-d65c622633b9"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.222187 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh787\" (UniqueName: \"kubernetes.io/projected/071b256a-ffeb-405a-b9ac-d65c622633b9-kube-api-access-bh787\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.222234 4966 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-util\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.222246 4966 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/071b256a-ffeb-405a-b9ac-d65c622633b9-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.585666 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" event={"ID":"071b256a-ffeb-405a-b9ac-d65c622633b9","Type":"ContainerDied","Data":"fc580bb7184f43ca779f284246079d85829a752d62bebda41033b13199288c18"} Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.585711 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc580bb7184f43ca779f284246079d85829a752d62bebda41033b13199288c18" Jan 27 15:59:26 crc kubenswrapper[4966]: I0127 15:59:26.585781 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.437804 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-784794b655-q7lf8"] Jan 27 15:59:38 crc kubenswrapper[4966]: E0127 15:59:38.438486 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071b256a-ffeb-405a-b9ac-d65c622633b9" containerName="pull" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.438497 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="071b256a-ffeb-405a-b9ac-d65c622633b9" containerName="pull" Jan 27 15:59:38 crc kubenswrapper[4966]: E0127 15:59:38.438511 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071b256a-ffeb-405a-b9ac-d65c622633b9" containerName="extract" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.438518 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="071b256a-ffeb-405a-b9ac-d65c622633b9" containerName="extract" Jan 27 15:59:38 crc kubenswrapper[4966]: E0127 15:59:38.438534 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071b256a-ffeb-405a-b9ac-d65c622633b9" containerName="util" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.438539 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="071b256a-ffeb-405a-b9ac-d65c622633b9" containerName="util" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.438657 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="071b256a-ffeb-405a-b9ac-d65c622633b9" containerName="extract" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.439173 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.443116 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.443194 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.443922 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.444959 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gnkxr" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.447318 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.454460 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-784794b655-q7lf8"] Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.529940 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9a9151b-f291-44db-a0fb-904cf48b7e37-webhook-cert\") pod \"metallb-operator-controller-manager-784794b655-q7lf8\" (UID: \"c9a9151b-f291-44db-a0fb-904cf48b7e37\") " pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.530012 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9a9151b-f291-44db-a0fb-904cf48b7e37-apiservice-cert\") pod \"metallb-operator-controller-manager-784794b655-q7lf8\" (UID: \"c9a9151b-f291-44db-a0fb-904cf48b7e37\") " pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.530111 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzjp4\" (UniqueName: \"kubernetes.io/projected/c9a9151b-f291-44db-a0fb-904cf48b7e37-kube-api-access-mzjp4\") pod \"metallb-operator-controller-manager-784794b655-q7lf8\" (UID: \"c9a9151b-f291-44db-a0fb-904cf48b7e37\") " pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.630926 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9a9151b-f291-44db-a0fb-904cf48b7e37-webhook-cert\") pod \"metallb-operator-controller-manager-784794b655-q7lf8\" (UID: \"c9a9151b-f291-44db-a0fb-904cf48b7e37\") " pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.630975 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9a9151b-f291-44db-a0fb-904cf48b7e37-apiservice-cert\") pod \"metallb-operator-controller-manager-784794b655-q7lf8\" (UID: \"c9a9151b-f291-44db-a0fb-904cf48b7e37\") " pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.631087 4966 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzjp4\" (UniqueName: \"kubernetes.io/projected/c9a9151b-f291-44db-a0fb-904cf48b7e37-kube-api-access-mzjp4\") pod \"metallb-operator-controller-manager-784794b655-q7lf8\" (UID: \"c9a9151b-f291-44db-a0fb-904cf48b7e37\") " pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.638156 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9a9151b-f291-44db-a0fb-904cf48b7e37-webhook-cert\") pod \"metallb-operator-controller-manager-784794b655-q7lf8\" (UID: \"c9a9151b-f291-44db-a0fb-904cf48b7e37\") " pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.638161 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9a9151b-f291-44db-a0fb-904cf48b7e37-apiservice-cert\") pod \"metallb-operator-controller-manager-784794b655-q7lf8\" (UID: \"c9a9151b-f291-44db-a0fb-904cf48b7e37\") " pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.676462 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzjp4\" (UniqueName: \"kubernetes.io/projected/c9a9151b-f291-44db-a0fb-904cf48b7e37-kube-api-access-mzjp4\") pod \"metallb-operator-controller-manager-784794b655-q7lf8\" (UID: \"c9a9151b-f291-44db-a0fb-904cf48b7e37\") " pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.751440 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9"] Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.752360 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.753635 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-px2gc" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.754132 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.754316 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.757241 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.775335 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9"] Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.936456 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef-apiservice-cert\") pod \"metallb-operator-webhook-server-546646bf6b-gmbc9\" (UID: \"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef\") " pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.936788 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnc69\" (UniqueName: \"kubernetes.io/projected/2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef-kube-api-access-vnc69\") pod \"metallb-operator-webhook-server-546646bf6b-gmbc9\" (UID: \"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef\") " pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:38 crc kubenswrapper[4966]: I0127 15:59:38.936847 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef-webhook-cert\") pod \"metallb-operator-webhook-server-546646bf6b-gmbc9\" (UID: \"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef\") " pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.037976 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnc69\" (UniqueName: \"kubernetes.io/projected/2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef-kube-api-access-vnc69\") pod \"metallb-operator-webhook-server-546646bf6b-gmbc9\" (UID: \"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef\") " pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.038054 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef-webhook-cert\") pod \"metallb-operator-webhook-server-546646bf6b-gmbc9\" (UID: \"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef\") " pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.038101 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef-apiservice-cert\") pod \"metallb-operator-webhook-server-546646bf6b-gmbc9\" (UID: \"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef\") " pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.046002 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef-webhook-cert\") pod \"metallb-operator-webhook-server-546646bf6b-gmbc9\" (UID: \"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef\") " pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.046150 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef-apiservice-cert\") pod \"metallb-operator-webhook-server-546646bf6b-gmbc9\" (UID: \"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef\") " pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.060554 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnc69\" (UniqueName: \"kubernetes.io/projected/2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef-kube-api-access-vnc69\") pod \"metallb-operator-webhook-server-546646bf6b-gmbc9\" (UID: \"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef\") " pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.073077 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.210844 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-784794b655-q7lf8"] Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.539167 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9"] Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.693213 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" event={"ID":"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef","Type":"ContainerStarted","Data":"34a462ffddf340321e2ebc49d8f0ca1156ad5d577be504c87c96f43b7042dd32"} Jan 27 15:59:39 crc kubenswrapper[4966]: I0127 15:59:39.694737 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" event={"ID":"c9a9151b-f291-44db-a0fb-904cf48b7e37","Type":"ContainerStarted","Data":"a925aeb3ff9a59075702e25babd2848725b37863d9e97c3488b5e95e73b69bf3"} Jan 27 15:59:42 crc kubenswrapper[4966]: I0127 15:59:42.720328 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" event={"ID":"c9a9151b-f291-44db-a0fb-904cf48b7e37","Type":"ContainerStarted","Data":"5a88ea0d9061ae5f7fb07d322f77a7b8e3cd5b44d06f7c03f142184e2b560d98"} Jan 27 15:59:42 crc kubenswrapper[4966]: I0127 15:59:42.720751 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 15:59:42 crc kubenswrapper[4966]: I0127 15:59:42.747596 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" podStartSLOduration=1.588578854 podStartE2EDuration="4.747579316s" podCreationTimestamp="2026-01-27 15:59:38 +0000 UTC" firstStartedPulling="2026-01-27 15:59:39.232728084 +0000 UTC m=+1045.535521572" lastFinishedPulling="2026-01-27 15:59:42.391728556 +0000 UTC m=+1048.694522034" observedRunningTime="2026-01-27 15:59:42.739138941 +0000 UTC m=+1049.041932449" watchObservedRunningTime="2026-01-27 15:59:42.747579316 +0000 UTC m=+1049.050372804" Jan 27 15:59:44 crc kubenswrapper[4966]: I0127 15:59:44.739099 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" event={"ID":"2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef","Type":"ContainerStarted","Data":"7ace6a2e65da1f98c08f6ed0fd474fab99d4561d106243d58cdf4987c5fc2b78"} Jan 27 15:59:44 crc 
kubenswrapper[4966]: I0127 15:59:44.739416 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 15:59:44 crc kubenswrapper[4966]: I0127 15:59:44.757701 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" podStartSLOduration=2.108747453 podStartE2EDuration="6.757684454s" podCreationTimestamp="2026-01-27 15:59:38 +0000 UTC" firstStartedPulling="2026-01-27 15:59:39.549002542 +0000 UTC m=+1045.851796050" lastFinishedPulling="2026-01-27 15:59:44.197939563 +0000 UTC m=+1050.500733051" observedRunningTime="2026-01-27 15:59:44.757096675 +0000 UTC m=+1051.059890263" watchObservedRunningTime="2026-01-27 15:59:44.757684454 +0000 UTC m=+1051.060477952" Jan 27 15:59:59 crc kubenswrapper[4966]: I0127 15:59:59.080378 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.149674 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225"] Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.150986 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.152701 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.156865 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.161638 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225"] Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.216207 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c54394f6-54e8-472f-b09b-198431196e09-secret-volume\") pod \"collect-profiles-29492160-m5225\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.216457 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c54394f6-54e8-472f-b09b-198431196e09-config-volume\") pod \"collect-profiles-29492160-m5225\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.216564 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp7kt\" (UniqueName: \"kubernetes.io/projected/c54394f6-54e8-472f-b09b-198431196e09-kube-api-access-rp7kt\") pod \"collect-profiles-29492160-m5225\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.318176 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp7kt\" (UniqueName: 
\"kubernetes.io/projected/c54394f6-54e8-472f-b09b-198431196e09-kube-api-access-rp7kt\") pod \"collect-profiles-29492160-m5225\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.318280 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c54394f6-54e8-472f-b09b-198431196e09-secret-volume\") pod \"collect-profiles-29492160-m5225\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.318338 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c54394f6-54e8-472f-b09b-198431196e09-config-volume\") pod \"collect-profiles-29492160-m5225\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.319235 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c54394f6-54e8-472f-b09b-198431196e09-config-volume\") pod \"collect-profiles-29492160-m5225\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.327244 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c54394f6-54e8-472f-b09b-198431196e09-secret-volume\") pod \"collect-profiles-29492160-m5225\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.334656 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp7kt\" (UniqueName: \"kubernetes.io/projected/c54394f6-54e8-472f-b09b-198431196e09-kube-api-access-rp7kt\") pod \"collect-profiles-29492160-m5225\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.470406 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:00 crc kubenswrapper[4966]: I0127 16:00:00.905734 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225"] Jan 27 16:00:00 crc kubenswrapper[4966]: W0127 16:00:00.926053 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc54394f6_54e8_472f_b09b_198431196e09.slice/crio-aede6a909c57418528a5e0a00b9f6d44096c5f7a17c276f9ca51b4bc2416c8c8 WatchSource:0}: Error finding container aede6a909c57418528a5e0a00b9f6d44096c5f7a17c276f9ca51b4bc2416c8c8: Status 404 returned error can't find the container with id aede6a909c57418528a5e0a00b9f6d44096c5f7a17c276f9ca51b4bc2416c8c8 Jan 27 16:00:01 crc kubenswrapper[4966]: I0127 16:00:01.867608 4966 generic.go:334] "Generic (PLEG): container finished" podID="c54394f6-54e8-472f-b09b-198431196e09" containerID="92ad7e2d28cd60d99f263d76ed16d0a5f54d8d8f72f6868c5ac79c2eba86a9e9" exitCode=0 Jan 27 16:00:01 crc kubenswrapper[4966]: I0127 16:00:01.867756 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" event={"ID":"c54394f6-54e8-472f-b09b-198431196e09","Type":"ContainerDied","Data":"92ad7e2d28cd60d99f263d76ed16d0a5f54d8d8f72f6868c5ac79c2eba86a9e9"} Jan 27 16:00:01 crc kubenswrapper[4966]: I0127 16:00:01.867849 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" event={"ID":"c54394f6-54e8-472f-b09b-198431196e09","Type":"ContainerStarted","Data":"aede6a909c57418528a5e0a00b9f6d44096c5f7a17c276f9ca51b4bc2416c8c8"} Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.240772 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.279106 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c54394f6-54e8-472f-b09b-198431196e09-secret-volume\") pod \"c54394f6-54e8-472f-b09b-198431196e09\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.279217 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c54394f6-54e8-472f-b09b-198431196e09-config-volume\") pod \"c54394f6-54e8-472f-b09b-198431196e09\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.279254 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp7kt\" (UniqueName: \"kubernetes.io/projected/c54394f6-54e8-472f-b09b-198431196e09-kube-api-access-rp7kt\") pod \"c54394f6-54e8-472f-b09b-198431196e09\" (UID: \"c54394f6-54e8-472f-b09b-198431196e09\") " Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.286685 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c54394f6-54e8-472f-b09b-198431196e09-config-volume" (OuterVolumeSpecName: "config-volume") pod "c54394f6-54e8-472f-b09b-198431196e09" (UID: "c54394f6-54e8-472f-b09b-198431196e09"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.294249 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54394f6-54e8-472f-b09b-198431196e09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c54394f6-54e8-472f-b09b-198431196e09" (UID: "c54394f6-54e8-472f-b09b-198431196e09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.299954 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54394f6-54e8-472f-b09b-198431196e09-kube-api-access-rp7kt" (OuterVolumeSpecName: "kube-api-access-rp7kt") pod "c54394f6-54e8-472f-b09b-198431196e09" (UID: "c54394f6-54e8-472f-b09b-198431196e09"). InnerVolumeSpecName "kube-api-access-rp7kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.380656 4966 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c54394f6-54e8-472f-b09b-198431196e09-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.380687 4966 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c54394f6-54e8-472f-b09b-198431196e09-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.380698 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp7kt\" (UniqueName: \"kubernetes.io/projected/c54394f6-54e8-472f-b09b-198431196e09-kube-api-access-rp7kt\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.891588 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" event={"ID":"c54394f6-54e8-472f-b09b-198431196e09","Type":"ContainerDied","Data":"aede6a909c57418528a5e0a00b9f6d44096c5f7a17c276f9ca51b4bc2416c8c8"} Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.891830 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aede6a909c57418528a5e0a00b9f6d44096c5f7a17c276f9ca51b4bc2416c8c8" Jan 27 16:00:03 crc kubenswrapper[4966]: I0127 16:00:03.891881 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225" Jan 27 16:00:10 crc kubenswrapper[4966]: I0127 16:00:10.120155 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:00:10 crc kubenswrapper[4966]: I0127 16:00:10.121136 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:00:18 crc kubenswrapper[4966]: I0127 16:00:18.760433 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.529136 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb"] Jan 27 16:00:19 crc kubenswrapper[4966]: E0127 16:00:19.531611 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54394f6-54e8-472f-b09b-198431196e09" containerName="collect-profiles" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.531642 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54394f6-54e8-472f-b09b-198431196e09" containerName="collect-profiles" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.531786 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54394f6-54e8-472f-b09b-198431196e09" containerName="collect-profiles" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.532361 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.537740 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.538029 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb"] Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.538236 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-btndk" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.559881 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fpvwf"] Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.565487 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.566562 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e510e0a-a47d-416e-aec1-c7de88b0a2af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-hfhvb\" (UID: \"3e510e0a-a47d-416e-aec1-c7de88b0a2af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.566597 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-metrics\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.566618 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-frr-sockets\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.566673 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmggd\" (UniqueName: \"kubernetes.io/projected/e75b042c-789e-43fc-8736-b3f5093f21db-kube-api-access-tmggd\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.566717 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krccw\" (UniqueName: \"kubernetes.io/projected/3e510e0a-a47d-416e-aec1-c7de88b0a2af-kube-api-access-krccw\") pod \"frr-k8s-webhook-server-7df86c4f6c-hfhvb\" (UID: \"3e510e0a-a47d-416e-aec1-c7de88b0a2af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.566737 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75b042c-789e-43fc-8736-b3f5093f21db-metrics-certs\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.566753 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-frr-conf\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.566797 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-reloader\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.566813 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e75b042c-789e-43fc-8736-b3f5093f21db-frr-startup\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " 
pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.567572 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.568833 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.625221 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wpv4z"] Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.628366 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.630141 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-hd8w5" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.630302 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.635412 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.635439 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.641363 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-nn8hx"] Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.642499 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.644329 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.659564 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nn8hx"] Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.672587 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-reloader\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.672653 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e75b042c-789e-43fc-8736-b3f5093f21db-frr-startup\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.672703 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e510e0a-a47d-416e-aec1-c7de88b0a2af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-hfhvb\" (UID: \"3e510e0a-a47d-416e-aec1-c7de88b0a2af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.672729 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-metrics\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc 
kubenswrapper[4966]: I0127 16:00:19.672764 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-frr-sockets\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.672840 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmggd\" (UniqueName: \"kubernetes.io/projected/e75b042c-789e-43fc-8736-b3f5093f21db-kube-api-access-tmggd\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.672933 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krccw\" (UniqueName: \"kubernetes.io/projected/3e510e0a-a47d-416e-aec1-c7de88b0a2af-kube-api-access-krccw\") pod \"frr-k8s-webhook-server-7df86c4f6c-hfhvb\" (UID: \"3e510e0a-a47d-416e-aec1-c7de88b0a2af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.672965 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75b042c-789e-43fc-8736-b3f5093f21db-metrics-certs\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.672996 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-frr-conf\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.673109 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-reloader\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.673449 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-frr-conf\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.674393 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e75b042c-789e-43fc-8736-b3f5093f21db-frr-startup\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: E0127 16:00:19.674502 4966 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 27 16:00:19 crc kubenswrapper[4966]: E0127 16:00:19.674565 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e510e0a-a47d-416e-aec1-c7de88b0a2af-cert podName:3e510e0a-a47d-416e-aec1-c7de88b0a2af nodeName:}" failed. No retries permitted until 2026-01-27 16:00:20.174548012 +0000 UTC m=+1086.477341500 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3e510e0a-a47d-416e-aec1-c7de88b0a2af-cert") pod "frr-k8s-webhook-server-7df86c4f6c-hfhvb" (UID: "3e510e0a-a47d-416e-aec1-c7de88b0a2af") : secret "frr-k8s-webhook-server-cert" not found Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.674962 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-metrics\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.675182 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e75b042c-789e-43fc-8736-b3f5093f21db-frr-sockets\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: E0127 16:00:19.675206 4966 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 27 16:00:19 crc kubenswrapper[4966]: E0127 16:00:19.675245 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e75b042c-789e-43fc-8736-b3f5093f21db-metrics-certs podName:e75b042c-789e-43fc-8736-b3f5093f21db nodeName:}" failed. No retries permitted until 2026-01-27 16:00:20.175235553 +0000 UTC m=+1086.478029031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e75b042c-789e-43fc-8736-b3f5093f21db-metrics-certs") pod "frr-k8s-fpvwf" (UID: "e75b042c-789e-43fc-8736-b3f5093f21db") : secret "frr-k8s-certs-secret" not found Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.718561 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmggd\" (UniqueName: \"kubernetes.io/projected/e75b042c-789e-43fc-8736-b3f5093f21db-kube-api-access-tmggd\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.719387 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krccw\" (UniqueName: \"kubernetes.io/projected/3e510e0a-a47d-416e-aec1-c7de88b0a2af-kube-api-access-krccw\") pod \"frr-k8s-webhook-server-7df86c4f6c-hfhvb\" (UID: \"3e510e0a-a47d-416e-aec1-c7de88b0a2af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.774800 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fb28925-f952-48ea-88e5-db1ec4dba047-metrics-certs\") pod \"controller-6968d8fdc4-nn8hx\" (UID: \"9fb28925-f952-48ea-88e5-db1ec4dba047\") " pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.775701 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvt4z\" (UniqueName: \"kubernetes.io/projected/9fb28925-f952-48ea-88e5-db1ec4dba047-kube-api-access-bvt4z\") pod \"controller-6968d8fdc4-nn8hx\" (UID: \"9fb28925-f952-48ea-88e5-db1ec4dba047\") " pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.775836 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-memberlist\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.775978 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce06a03b-db66-49be-ace7-f79a0b78dc62-metallb-excludel2\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.776147 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72zcp\" (UniqueName: \"kubernetes.io/projected/ce06a03b-db66-49be-ace7-f79a0b78dc62-kube-api-access-72zcp\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.776269 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb28925-f952-48ea-88e5-db1ec4dba047-cert\") pod \"controller-6968d8fdc4-nn8hx\" (UID: \"9fb28925-f952-48ea-88e5-db1ec4dba047\") " pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.776437 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-metrics-certs\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.877490 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce06a03b-db66-49be-ace7-f79a0b78dc62-metallb-excludel2\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.877541 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72zcp\" (UniqueName: \"kubernetes.io/projected/ce06a03b-db66-49be-ace7-f79a0b78dc62-kube-api-access-72zcp\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.877589 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb28925-f952-48ea-88e5-db1ec4dba047-cert\") pod \"controller-6968d8fdc4-nn8hx\" (UID: \"9fb28925-f952-48ea-88e5-db1ec4dba047\") " pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.877660 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-metrics-certs\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.877690 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fb28925-f952-48ea-88e5-db1ec4dba047-metrics-certs\") pod \"controller-6968d8fdc4-nn8hx\" (UID: 
\"9fb28925-f952-48ea-88e5-db1ec4dba047\") " pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.877731 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvt4z\" (UniqueName: \"kubernetes.io/projected/9fb28925-f952-48ea-88e5-db1ec4dba047-kube-api-access-bvt4z\") pod \"controller-6968d8fdc4-nn8hx\" (UID: \"9fb28925-f952-48ea-88e5-db1ec4dba047\") " pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.877748 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-memberlist\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: E0127 16:00:19.877860 4966 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 16:00:19 crc kubenswrapper[4966]: E0127 16:00:19.877924 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-memberlist podName:ce06a03b-db66-49be-ace7-f79a0b78dc62 nodeName:}" failed. No retries permitted until 2026-01-27 16:00:20.377910446 +0000 UTC m=+1086.680703934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-memberlist") pod "speaker-wpv4z" (UID: "ce06a03b-db66-49be-ace7-f79a0b78dc62") : secret "metallb-memberlist" not found Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.878667 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce06a03b-db66-49be-ace7-f79a0b78dc62-metallb-excludel2\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.880440 4966 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.883662 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fb28925-f952-48ea-88e5-db1ec4dba047-metrics-certs\") pod \"controller-6968d8fdc4-nn8hx\" (UID: \"9fb28925-f952-48ea-88e5-db1ec4dba047\") " pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.884158 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-metrics-certs\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.892027 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb28925-f952-48ea-88e5-db1ec4dba047-cert\") pod \"controller-6968d8fdc4-nn8hx\" (UID: \"9fb28925-f952-48ea-88e5-db1ec4dba047\") " pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.897816 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72zcp\" (UniqueName: \"kubernetes.io/projected/ce06a03b-db66-49be-ace7-f79a0b78dc62-kube-api-access-72zcp\") pod 
\"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:19 crc kubenswrapper[4966]: I0127 16:00:19.900827 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvt4z\" (UniqueName: \"kubernetes.io/projected/9fb28925-f952-48ea-88e5-db1ec4dba047-kube-api-access-bvt4z\") pod \"controller-6968d8fdc4-nn8hx\" (UID: \"9fb28925-f952-48ea-88e5-db1ec4dba047\") " pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.012118 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nn8hx" Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.182773 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e510e0a-a47d-416e-aec1-c7de88b0a2af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-hfhvb\" (UID: \"3e510e0a-a47d-416e-aec1-c7de88b0a2af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.183227 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75b042c-789e-43fc-8736-b3f5093f21db-metrics-certs\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.187299 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75b042c-789e-43fc-8736-b3f5093f21db-metrics-certs\") pod \"frr-k8s-fpvwf\" (UID: \"e75b042c-789e-43fc-8736-b3f5093f21db\") " pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.187510 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e510e0a-a47d-416e-aec1-c7de88b0a2af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-hfhvb\" (UID: \"3e510e0a-a47d-416e-aec1-c7de88b0a2af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.386169 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-memberlist\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z" Jan 27 16:00:20 crc kubenswrapper[4966]: E0127 16:00:20.386348 4966 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 16:00:20 crc kubenswrapper[4966]: E0127 16:00:20.386419 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-memberlist podName:ce06a03b-db66-49be-ace7-f79a0b78dc62 nodeName:}" failed. No retries permitted until 2026-01-27 16:00:21.386400837 +0000 UTC m=+1087.689194315 (durationBeforeRetry 1s). 
Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.429740 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nn8hx"]
Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.465615 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb"
Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.480626 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fpvwf"
Jan 27 16:00:20 crc kubenswrapper[4966]: I0127 16:00:20.713148 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb"]
Jan 27 16:00:21 crc kubenswrapper[4966]: I0127 16:00:21.021062 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerStarted","Data":"78ad96b154036fbdaad01efb334143bd780886c3f05b818e402222d479ec97a4"}
Jan 27 16:00:21 crc kubenswrapper[4966]: I0127 16:00:21.022887 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" event={"ID":"3e510e0a-a47d-416e-aec1-c7de88b0a2af","Type":"ContainerStarted","Data":"5e19bb511fab765cfacc83ab69ff3fc0912a216988087af8a904fc6e864f665f"}
Jan 27 16:00:21 crc kubenswrapper[4966]: I0127 16:00:21.024762 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nn8hx" event={"ID":"9fb28925-f952-48ea-88e5-db1ec4dba047","Type":"ContainerStarted","Data":"2eb6fec770b2fbc65d557c0089feb4866377c49096d9c420d4484a4c5c5983da"}
Jan 27 16:00:21 crc kubenswrapper[4966]: I0127 16:00:21.024812 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nn8hx" event={"ID":"9fb28925-f952-48ea-88e5-db1ec4dba047","Type":"ContainerStarted","Data":"edc3058b4701062b9af689f16d0d5554c855afd5ec389b3e2ff5d540eb0cc11d"}
Jan 27 16:00:21 crc kubenswrapper[4966]: I0127 16:00:21.024826 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nn8hx" event={"ID":"9fb28925-f952-48ea-88e5-db1ec4dba047","Type":"ContainerStarted","Data":"bbc20e17fd7f154747c846e0bd6e5f76ce57ea31850d938ad4bb93a5fc4ee6d8"}
Jan 27 16:00:21 crc kubenswrapper[4966]: I0127 16:00:21.024935 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-nn8hx"
Jan 27 16:00:21 crc kubenswrapper[4966]: I0127 16:00:21.403946 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-memberlist\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z"
Jan 27 16:00:21 crc kubenswrapper[4966]: I0127 16:00:21.418508 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce06a03b-db66-49be-ace7-f79a0b78dc62-memberlist\") pod \"speaker-wpv4z\" (UID: \"ce06a03b-db66-49be-ace7-f79a0b78dc62\") " pod="metallb-system/speaker-wpv4z"
Jan 27 16:00:21 crc kubenswrapper[4966]: I0127 16:00:21.495146 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wpv4z"
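The memberlist failures above show the volume manager's per-operation retry delay doubling between attempts (durationBeforeRetry 500ms, then 1s) until the secret finally exists and the mount at 16:00:21.418508 succeeds. A minimal sketch of that doubling backoff, assuming the 500ms initial delay and factor of 2 visible above; the ceiling is illustrative, not taken from this log.

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond  // first "durationBeforeRetry" above
	const maxDelay = 2 * time.Minute // assumed cap, not observable in this log
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2 // 500ms -> 1s, as in the two errors above
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}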
Need to start a new one" pod="metallb-system/speaker-wpv4z" Jan 27 16:00:22 crc kubenswrapper[4966]: I0127 16:00:22.037848 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wpv4z" event={"ID":"ce06a03b-db66-49be-ace7-f79a0b78dc62","Type":"ContainerStarted","Data":"b8e2f25137a3a0bdaaadce4e11fe9656739f79145d8fc96c9f1e4179d2dc4905"} Jan 27 16:00:22 crc kubenswrapper[4966]: I0127 16:00:22.038193 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wpv4z" event={"ID":"ce06a03b-db66-49be-ace7-f79a0b78dc62","Type":"ContainerStarted","Data":"f278a95b1eb8fec1f4e4131a71ccf31d013ca07cc44d3c38b2b0034bba7835fd"} Jan 27 16:00:23 crc kubenswrapper[4966]: I0127 16:00:23.049642 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wpv4z" event={"ID":"ce06a03b-db66-49be-ace7-f79a0b78dc62","Type":"ContainerStarted","Data":"b34fa4241c7b670eda910e32acd93992d62a8f1992d7dd8e6ab456d53b243920"} Jan 27 16:00:23 crc kubenswrapper[4966]: I0127 16:00:23.049970 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wpv4z" Jan 27 16:00:23 crc kubenswrapper[4966]: I0127 16:00:23.066826 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wpv4z" podStartSLOduration=4.066811916 podStartE2EDuration="4.066811916s" podCreationTimestamp="2026-01-27 16:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:00:23.062407117 +0000 UTC m=+1089.365200625" watchObservedRunningTime="2026-01-27 16:00:23.066811916 +0000 UTC m=+1089.369605404" Jan 27 16:00:23 crc kubenswrapper[4966]: I0127 16:00:23.068240 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-nn8hx" podStartSLOduration=4.068085106 podStartE2EDuration="4.068085106s" podCreationTimestamp="2026-01-27 16:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:00:21.046228419 +0000 UTC m=+1087.349021927" watchObservedRunningTime="2026-01-27 16:00:23.068085106 +0000 UTC m=+1089.370878594" Jan 27 16:00:29 crc kubenswrapper[4966]: I0127 16:00:29.120843 4966 generic.go:334] "Generic (PLEG): container finished" podID="e75b042c-789e-43fc-8736-b3f5093f21db" containerID="ff5245ec1743dcc5f0cb99b3cd86923935661032efeb2e51066523d38211f896" exitCode=0 Jan 27 16:00:29 crc kubenswrapper[4966]: I0127 16:00:29.120982 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerDied","Data":"ff5245ec1743dcc5f0cb99b3cd86923935661032efeb2e51066523d38211f896"} Jan 27 16:00:29 crc kubenswrapper[4966]: I0127 16:00:29.124053 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" event={"ID":"3e510e0a-a47d-416e-aec1-c7de88b0a2af","Type":"ContainerStarted","Data":"02ade3bf4a2516b3c82e52c35c6a892ef6a46a0fc6b01c0fcf75bdb056adf70c"} Jan 27 16:00:29 crc kubenswrapper[4966]: I0127 16:00:29.124232 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:29 crc kubenswrapper[4966]: I0127 16:00:29.192893 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" 
Jan 27 16:00:30 crc kubenswrapper[4966]: I0127 16:00:30.016638 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-nn8hx"
Jan 27 16:00:30 crc kubenswrapper[4966]: I0127 16:00:30.135455 4966 generic.go:334] "Generic (PLEG): container finished" podID="e75b042c-789e-43fc-8736-b3f5093f21db" containerID="f5a094533db1bd94e4643956816310484358056496957848846875060620aac5" exitCode=0
Jan 27 16:00:30 crc kubenswrapper[4966]: I0127 16:00:30.136043 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerDied","Data":"f5a094533db1bd94e4643956816310484358056496957848846875060620aac5"}
Jan 27 16:00:31 crc kubenswrapper[4966]: I0127 16:00:31.166190 4966 generic.go:334] "Generic (PLEG): container finished" podID="e75b042c-789e-43fc-8736-b3f5093f21db" containerID="08b8f3193c39eb498ab7f48bc7a1d1997b7df91cea4dc2fea8224c01e0f909a8" exitCode=0
Jan 27 16:00:31 crc kubenswrapper[4966]: I0127 16:00:31.166280 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerDied","Data":"08b8f3193c39eb498ab7f48bc7a1d1997b7df91cea4dc2fea8224c01e0f909a8"}
Jan 27 16:00:31 crc kubenswrapper[4966]: I0127 16:00:31.499745 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wpv4z"
Jan 27 16:00:32 crc kubenswrapper[4966]: I0127 16:00:32.178579 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerStarted","Data":"e224ab4c87320968867721f0b5d5d97a3453e0f608b2818b76ea299d6c120917"}
Jan 27 16:00:32 crc kubenswrapper[4966]: I0127 16:00:32.179028 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerStarted","Data":"b7c9af743b5f3d202359cf8dde06682358d22535b1fcf5e71dfe0b97f8fa02d1"}
Jan 27 16:00:33 crc kubenswrapper[4966]: I0127 16:00:33.195594 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerStarted","Data":"834847f43f7d79ac32963f9105ef2741d1628a3c8396ffe47e735e70733172d6"}
Jan 27 16:00:33 crc kubenswrapper[4966]: I0127 16:00:33.195924 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerStarted","Data":"27cadc8a09cb300886879693019d6d63bf86cdc017ccf61ff0502fc4f8157477"}
Jan 27 16:00:33 crc kubenswrapper[4966]: I0127 16:00:33.195943 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerStarted","Data":"31ee9db35905dd7c5f936ff0bf7d2cbedaf297fe26303e372e501c51dbbb1e59"}
Jan 27 16:00:33 crc kubenswrapper[4966]: I0127 16:00:33.195960 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fpvwf"
(probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:33 crc kubenswrapper[4966]: I0127 16:00:33.195973 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerStarted","Data":"11c3df3c3b21fe59395983d6efd08c2d9a612941e277589ab4e5b0e1055c7a2f"} Jan 27 16:00:33 crc kubenswrapper[4966]: I0127 16:00:33.223917 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fpvwf" podStartSLOduration=6.49885242 podStartE2EDuration="14.223881269s" podCreationTimestamp="2026-01-27 16:00:19 +0000 UTC" firstStartedPulling="2026-01-27 16:00:20.603223114 +0000 UTC m=+1086.906016602" lastFinishedPulling="2026-01-27 16:00:28.328251963 +0000 UTC m=+1094.631045451" observedRunningTime="2026-01-27 16:00:33.222692331 +0000 UTC m=+1099.525485839" watchObservedRunningTime="2026-01-27 16:00:33.223881269 +0000 UTC m=+1099.526674767" Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.634272 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pkmk9"] Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.635252 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pkmk9" Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.637445 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.637450 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8mfqq" Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.637478 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.672011 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pkmk9"] Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.770175 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wjs6\" (UniqueName: \"kubernetes.io/projected/02a4949e-8d4b-4192-a34f-2d4f55e0c8cc-kube-api-access-7wjs6\") pod \"openstack-operator-index-pkmk9\" (UID: \"02a4949e-8d4b-4192-a34f-2d4f55e0c8cc\") " pod="openstack-operators/openstack-operator-index-pkmk9" Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.871624 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wjs6\" (UniqueName: \"kubernetes.io/projected/02a4949e-8d4b-4192-a34f-2d4f55e0c8cc-kube-api-access-7wjs6\") pod \"openstack-operator-index-pkmk9\" (UID: \"02a4949e-8d4b-4192-a34f-2d4f55e0c8cc\") " pod="openstack-operators/openstack-operator-index-pkmk9" Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.899673 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wjs6\" (UniqueName: \"kubernetes.io/projected/02a4949e-8d4b-4192-a34f-2d4f55e0c8cc-kube-api-access-7wjs6\") pod \"openstack-operator-index-pkmk9\" (UID: \"02a4949e-8d4b-4192-a34f-2d4f55e0c8cc\") " pod="openstack-operators/openstack-operator-index-pkmk9" Jan 27 16:00:34 crc kubenswrapper[4966]: I0127 16:00:34.968907 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pkmk9" Jan 27 16:00:35 crc kubenswrapper[4966]: I0127 16:00:35.409933 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pkmk9"] Jan 27 16:00:35 crc kubenswrapper[4966]: I0127 16:00:35.482108 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:35 crc kubenswrapper[4966]: I0127 16:00:35.523197 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:36 crc kubenswrapper[4966]: I0127 16:00:36.225146 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkmk9" event={"ID":"02a4949e-8d4b-4192-a34f-2d4f55e0c8cc","Type":"ContainerStarted","Data":"65a00f6de2b37106c1284c5c3f307b9769aee2741a7428fee978a9aa9d21820e"} Jan 27 16:00:38 crc kubenswrapper[4966]: I0127 16:00:38.010231 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pkmk9"] Jan 27 16:00:38 crc kubenswrapper[4966]: I0127 16:00:38.623850 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-48xxm"] Jan 27 16:00:38 crc kubenswrapper[4966]: I0127 16:00:38.625409 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 16:00:38 crc kubenswrapper[4966]: I0127 16:00:38.641248 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-48xxm"] Jan 27 16:00:38 crc kubenswrapper[4966]: I0127 16:00:38.742755 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxmq\" (UniqueName: \"kubernetes.io/projected/57175838-b13a-4dc9-bf85-ed8668e3d88c-kube-api-access-xlxmq\") pod \"openstack-operator-index-48xxm\" (UID: \"57175838-b13a-4dc9-bf85-ed8668e3d88c\") " pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 16:00:38 crc kubenswrapper[4966]: I0127 16:00:38.844827 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxmq\" (UniqueName: \"kubernetes.io/projected/57175838-b13a-4dc9-bf85-ed8668e3d88c-kube-api-access-xlxmq\") pod \"openstack-operator-index-48xxm\" (UID: \"57175838-b13a-4dc9-bf85-ed8668e3d88c\") " pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 16:00:38 crc kubenswrapper[4966]: I0127 16:00:38.870402 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlxmq\" (UniqueName: \"kubernetes.io/projected/57175838-b13a-4dc9-bf85-ed8668e3d88c-kube-api-access-xlxmq\") pod \"openstack-operator-index-48xxm\" (UID: \"57175838-b13a-4dc9-bf85-ed8668e3d88c\") " pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 16:00:38 crc kubenswrapper[4966]: I0127 16:00:38.993224 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 16:00:39 crc kubenswrapper[4966]: I0127 16:00:39.247612 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkmk9" event={"ID":"02a4949e-8d4b-4192-a34f-2d4f55e0c8cc","Type":"ContainerStarted","Data":"6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a"} Jan 27 16:00:39 crc kubenswrapper[4966]: I0127 16:00:39.247704 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pkmk9" podUID="02a4949e-8d4b-4192-a34f-2d4f55e0c8cc" containerName="registry-server" containerID="cri-o://6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a" gracePeriod=2 Jan 27 16:00:39 crc kubenswrapper[4966]: I0127 16:00:39.265623 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pkmk9" podStartSLOduration=2.415571841 podStartE2EDuration="5.265601276s" podCreationTimestamp="2026-01-27 16:00:34 +0000 UTC" firstStartedPulling="2026-01-27 16:00:35.413179551 +0000 UTC m=+1101.715973039" lastFinishedPulling="2026-01-27 16:00:38.263208976 +0000 UTC m=+1104.566002474" observedRunningTime="2026-01-27 16:00:39.262233351 +0000 UTC m=+1105.565026879" watchObservedRunningTime="2026-01-27 16:00:39.265601276 +0000 UTC m=+1105.568394774" Jan 27 16:00:39 crc kubenswrapper[4966]: I0127 16:00:39.415475 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-48xxm"] Jan 27 16:00:39 crc kubenswrapper[4966]: W0127 16:00:39.426571 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57175838_b13a_4dc9_bf85_ed8668e3d88c.slice/crio-f9b8414063311fecf33fc90265499f024657dee52614446d0f0ba27eb8aba4a4 WatchSource:0}: Error finding container f9b8414063311fecf33fc90265499f024657dee52614446d0f0ba27eb8aba4a4: Status 404 returned error can't find the container with id f9b8414063311fecf33fc90265499f024657dee52614446d0f0ba27eb8aba4a4 Jan 27 16:00:39 crc kubenswrapper[4966]: I0127 16:00:39.701465 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pkmk9" Jan 27 16:00:39 crc kubenswrapper[4966]: I0127 16:00:39.763467 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wjs6\" (UniqueName: \"kubernetes.io/projected/02a4949e-8d4b-4192-a34f-2d4f55e0c8cc-kube-api-access-7wjs6\") pod \"02a4949e-8d4b-4192-a34f-2d4f55e0c8cc\" (UID: \"02a4949e-8d4b-4192-a34f-2d4f55e0c8cc\") " Jan 27 16:00:39 crc kubenswrapper[4966]: I0127 16:00:39.770826 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a4949e-8d4b-4192-a34f-2d4f55e0c8cc-kube-api-access-7wjs6" (OuterVolumeSpecName: "kube-api-access-7wjs6") pod "02a4949e-8d4b-4192-a34f-2d4f55e0c8cc" (UID: "02a4949e-8d4b-4192-a34f-2d4f55e0c8cc"). InnerVolumeSpecName "kube-api-access-7wjs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:00:39 crc kubenswrapper[4966]: I0127 16:00:39.866008 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wjs6\" (UniqueName: \"kubernetes.io/projected/02a4949e-8d4b-4192-a34f-2d4f55e0c8cc-kube-api-access-7wjs6\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.119936 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.120009 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.258264 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-48xxm" event={"ID":"57175838-b13a-4dc9-bf85-ed8668e3d88c","Type":"ContainerStarted","Data":"5212cec5a0a5d6f38ebf2aaa68784012ddcd0e085ca8b77ba837e11485c578e2"} Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.258316 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-48xxm" event={"ID":"57175838-b13a-4dc9-bf85-ed8668e3d88c","Type":"ContainerStarted","Data":"f9b8414063311fecf33fc90265499f024657dee52614446d0f0ba27eb8aba4a4"} Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.260443 4966 generic.go:334] "Generic (PLEG): container finished" podID="02a4949e-8d4b-4192-a34f-2d4f55e0c8cc" containerID="6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a" exitCode=0 Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.260508 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pkmk9" Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.260535 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkmk9" event={"ID":"02a4949e-8d4b-4192-a34f-2d4f55e0c8cc","Type":"ContainerDied","Data":"6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a"} Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.260575 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkmk9" event={"ID":"02a4949e-8d4b-4192-a34f-2d4f55e0c8cc","Type":"ContainerDied","Data":"65a00f6de2b37106c1284c5c3f307b9769aee2741a7428fee978a9aa9d21820e"} Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.260596 4966 scope.go:117] "RemoveContainer" containerID="6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a" Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.282144 4966 scope.go:117] "RemoveContainer" containerID="6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a" Jan 27 16:00:40 crc kubenswrapper[4966]: E0127 16:00:40.286615 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a\": container with ID starting with 6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a not found: ID does not exist" containerID="6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a" Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.286681 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a"} err="failed to get container status \"6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a\": rpc error: code = NotFound desc = could not find container \"6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a\": container with ID starting with 6c0caddf32aaa006ec13d7a7802e7689d66fe099d364e7840813ca69f43da15a not found: ID does not exist" Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.291344 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-48xxm" podStartSLOduration=2.242831358 podStartE2EDuration="2.291327269s" podCreationTimestamp="2026-01-27 16:00:38 +0000 UTC" firstStartedPulling="2026-01-27 16:00:39.43049348 +0000 UTC m=+1105.733286968" lastFinishedPulling="2026-01-27 16:00:39.478989401 +0000 UTC m=+1105.781782879" observedRunningTime="2026-01-27 16:00:40.28433812 +0000 UTC m=+1106.587131628" watchObservedRunningTime="2026-01-27 16:00:40.291327269 +0000 UTC m=+1106.594120757" Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.309823 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pkmk9"] Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.320686 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pkmk9"] Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.471288 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 16:00:40 crc kubenswrapper[4966]: I0127 16:00:40.533278 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a4949e-8d4b-4192-a34f-2d4f55e0c8cc" 
path="/var/lib/kubelet/pods/02a4949e-8d4b-4192-a34f-2d4f55e0c8cc/volumes" Jan 27 16:00:48 crc kubenswrapper[4966]: I0127 16:00:48.994349 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 16:00:48 crc kubenswrapper[4966]: I0127 16:00:48.995086 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 16:00:49 crc kubenswrapper[4966]: I0127 16:00:49.027119 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 16:00:49 crc kubenswrapper[4966]: I0127 16:00:49.423105 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.465978 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp"] Jan 27 16:00:50 crc kubenswrapper[4966]: E0127 16:00:50.466670 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a4949e-8d4b-4192-a34f-2d4f55e0c8cc" containerName="registry-server" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.466697 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a4949e-8d4b-4192-a34f-2d4f55e0c8cc" containerName="registry-server" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.467086 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a4949e-8d4b-4192-a34f-2d4f55e0c8cc" containerName="registry-server" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.469310 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.472389 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9pkqx" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.476180 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp"] Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.485960 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fpvwf" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.550186 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lp2\" (UniqueName: \"kubernetes.io/projected/99316d5d-7260-4487-98dc-a531f3501aa0-kube-api-access-f8lp2\") pod \"1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.550285 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-util\") pod \"1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.550356 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-bundle\") pod \"1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.651755 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8lp2\" (UniqueName: \"kubernetes.io/projected/99316d5d-7260-4487-98dc-a531f3501aa0-kube-api-access-f8lp2\") pod \"1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.651814 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-util\") pod \"1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.651845 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-bundle\") pod \"1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.652428 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-util\") pod \"1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.652510 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-bundle\") pod \"1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.684057 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8lp2\" (UniqueName: \"kubernetes.io/projected/99316d5d-7260-4487-98dc-a531f3501aa0-kube-api-access-f8lp2\") pod \"1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:50 crc kubenswrapper[4966]: I0127 16:00:50.793523 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:51 crc kubenswrapper[4966]: I0127 16:00:51.268996 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp"] Jan 27 16:00:51 crc kubenswrapper[4966]: I0127 16:00:51.386283 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" event={"ID":"99316d5d-7260-4487-98dc-a531f3501aa0","Type":"ContainerStarted","Data":"4fe4c76184c38852351a155f5508c878858058943ed7d1b6de7d1bca7406eb5d"} Jan 27 16:00:52 crc kubenswrapper[4966]: I0127 16:00:52.397604 4966 generic.go:334] "Generic (PLEG): container finished" podID="99316d5d-7260-4487-98dc-a531f3501aa0" containerID="9bf91a241206e58fd910aef54b121de7178fe06da051a4dd7f9f35a3f30bf52f" exitCode=0 Jan 27 16:00:52 crc kubenswrapper[4966]: I0127 16:00:52.397679 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" event={"ID":"99316d5d-7260-4487-98dc-a531f3501aa0","Type":"ContainerDied","Data":"9bf91a241206e58fd910aef54b121de7178fe06da051a4dd7f9f35a3f30bf52f"} Jan 27 16:00:54 crc kubenswrapper[4966]: I0127 16:00:54.430079 4966 generic.go:334] "Generic (PLEG): container finished" podID="99316d5d-7260-4487-98dc-a531f3501aa0" containerID="72eb9d9e1994ec751846ad12e7eacd893c01f087064c1c11c1b3bc80c318bc12" exitCode=0 Jan 27 16:00:54 crc kubenswrapper[4966]: I0127 16:00:54.430176 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" event={"ID":"99316d5d-7260-4487-98dc-a531f3501aa0","Type":"ContainerDied","Data":"72eb9d9e1994ec751846ad12e7eacd893c01f087064c1c11c1b3bc80c318bc12"} Jan 27 16:00:55 crc kubenswrapper[4966]: I0127 16:00:55.439827 4966 generic.go:334] "Generic (PLEG): container finished" podID="99316d5d-7260-4487-98dc-a531f3501aa0" containerID="0e0e55844dfb96dc23a2e4d6033dd7d83d8b643ab2f9a1dd26f6eed1b4b5e244" exitCode=0 Jan 27 16:00:55 crc kubenswrapper[4966]: I0127 16:00:55.439875 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" event={"ID":"99316d5d-7260-4487-98dc-a531f3501aa0","Type":"ContainerDied","Data":"0e0e55844dfb96dc23a2e4d6033dd7d83d8b643ab2f9a1dd26f6eed1b4b5e244"} Jan 27 16:00:56 crc kubenswrapper[4966]: I0127 16:00:56.746349 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:56 crc kubenswrapper[4966]: I0127 16:00:56.859737 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8lp2\" (UniqueName: \"kubernetes.io/projected/99316d5d-7260-4487-98dc-a531f3501aa0-kube-api-access-f8lp2\") pod \"99316d5d-7260-4487-98dc-a531f3501aa0\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " Jan 27 16:00:56 crc kubenswrapper[4966]: I0127 16:00:56.859845 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-bundle\") pod \"99316d5d-7260-4487-98dc-a531f3501aa0\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " Jan 27 16:00:56 crc kubenswrapper[4966]: I0127 16:00:56.859944 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-util\") pod \"99316d5d-7260-4487-98dc-a531f3501aa0\" (UID: \"99316d5d-7260-4487-98dc-a531f3501aa0\") " Jan 27 16:00:56 crc kubenswrapper[4966]: I0127 16:00:56.860605 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-bundle" (OuterVolumeSpecName: "bundle") pod "99316d5d-7260-4487-98dc-a531f3501aa0" (UID: "99316d5d-7260-4487-98dc-a531f3501aa0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:00:56 crc kubenswrapper[4966]: I0127 16:00:56.866583 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99316d5d-7260-4487-98dc-a531f3501aa0-kube-api-access-f8lp2" (OuterVolumeSpecName: "kube-api-access-f8lp2") pod "99316d5d-7260-4487-98dc-a531f3501aa0" (UID: "99316d5d-7260-4487-98dc-a531f3501aa0"). InnerVolumeSpecName "kube-api-access-f8lp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:00:56 crc kubenswrapper[4966]: I0127 16:00:56.962127 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8lp2\" (UniqueName: \"kubernetes.io/projected/99316d5d-7260-4487-98dc-a531f3501aa0-kube-api-access-f8lp2\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:56 crc kubenswrapper[4966]: I0127 16:00:56.962151 4966 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:56 crc kubenswrapper[4966]: I0127 16:00:56.997544 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-util" (OuterVolumeSpecName: "util") pod "99316d5d-7260-4487-98dc-a531f3501aa0" (UID: "99316d5d-7260-4487-98dc-a531f3501aa0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:00:57 crc kubenswrapper[4966]: I0127 16:00:57.063126 4966 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99316d5d-7260-4487-98dc-a531f3501aa0-util\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:57 crc kubenswrapper[4966]: I0127 16:00:57.462255 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" event={"ID":"99316d5d-7260-4487-98dc-a531f3501aa0","Type":"ContainerDied","Data":"4fe4c76184c38852351a155f5508c878858058943ed7d1b6de7d1bca7406eb5d"} Jan 27 16:00:57 crc kubenswrapper[4966]: I0127 16:00:57.462313 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fe4c76184c38852351a155f5508c878858058943ed7d1b6de7d1bca7406eb5d" Jan 27 16:00:57 crc kubenswrapper[4966]: I0127 16:00:57.462341 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp" Jan 27 16:00:57 crc kubenswrapper[4966]: E0127 16:00:57.573082 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99316d5d_7260_4487_98dc_a531f3501aa0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99316d5d_7260_4487_98dc_a531f3501aa0.slice/crio-4fe4c76184c38852351a155f5508c878858058943ed7d1b6de7d1bca7406eb5d\": RecentStats: unable to find data in memory cache]" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.638910 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm"] Jan 27 16:01:02 crc kubenswrapper[4966]: E0127 16:01:02.639843 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99316d5d-7260-4487-98dc-a531f3501aa0" containerName="extract" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.639861 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="99316d5d-7260-4487-98dc-a531f3501aa0" containerName="extract" Jan 27 16:01:02 crc kubenswrapper[4966]: E0127 16:01:02.639882 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99316d5d-7260-4487-98dc-a531f3501aa0" containerName="pull" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.639889 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="99316d5d-7260-4487-98dc-a531f3501aa0" containerName="pull" Jan 27 16:01:02 crc kubenswrapper[4966]: E0127 16:01:02.639950 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99316d5d-7260-4487-98dc-a531f3501aa0" containerName="util" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.639959 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="99316d5d-7260-4487-98dc-a531f3501aa0" containerName="util" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.640614 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="99316d5d-7260-4487-98dc-a531f3501aa0" containerName="extract" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.644307 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.646756 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-td2gx" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.671111 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm"] Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.764666 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt6dk\" (UniqueName: \"kubernetes.io/projected/cfd02a37-95ae-43f0-9e50-2e9d78202bd9-kube-api-access-xt6dk\") pod \"openstack-operator-controller-init-75d854449c-9v6lm\" (UID: \"cfd02a37-95ae-43f0-9e50-2e9d78202bd9\") " pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.867052 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt6dk\" (UniqueName: \"kubernetes.io/projected/cfd02a37-95ae-43f0-9e50-2e9d78202bd9-kube-api-access-xt6dk\") pod \"openstack-operator-controller-init-75d854449c-9v6lm\" (UID: \"cfd02a37-95ae-43f0-9e50-2e9d78202bd9\") " pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.886944 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt6dk\" (UniqueName: \"kubernetes.io/projected/cfd02a37-95ae-43f0-9e50-2e9d78202bd9-kube-api-access-xt6dk\") pod \"openstack-operator-controller-init-75d854449c-9v6lm\" (UID: \"cfd02a37-95ae-43f0-9e50-2e9d78202bd9\") " pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" Jan 27 16:01:02 crc kubenswrapper[4966]: I0127 16:01:02.961181 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" Jan 27 16:01:03 crc kubenswrapper[4966]: I0127 16:01:03.437163 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm"] Jan 27 16:01:03 crc kubenswrapper[4966]: W0127 16:01:03.441137 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfd02a37_95ae_43f0_9e50_2e9d78202bd9.slice/crio-12a1f6ae29fd6727db20c04d8abb43a8ccae58df563e01f824d3794500f66647 WatchSource:0}: Error finding container 12a1f6ae29fd6727db20c04d8abb43a8ccae58df563e01f824d3794500f66647: Status 404 returned error can't find the container with id 12a1f6ae29fd6727db20c04d8abb43a8ccae58df563e01f824d3794500f66647 Jan 27 16:01:03 crc kubenswrapper[4966]: I0127 16:01:03.513453 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" event={"ID":"cfd02a37-95ae-43f0-9e50-2e9d78202bd9","Type":"ContainerStarted","Data":"12a1f6ae29fd6727db20c04d8abb43a8ccae58df563e01f824d3794500f66647"} Jan 27 16:01:07 crc kubenswrapper[4966]: I0127 16:01:07.565822 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" event={"ID":"cfd02a37-95ae-43f0-9e50-2e9d78202bd9","Type":"ContainerStarted","Data":"92f0f637e281bf322149d6dc79b06877d5129411f5a3954c300ce11b4ed915e8"} Jan 27 16:01:07 crc kubenswrapper[4966]: I0127 16:01:07.566463 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" Jan 27 16:01:07 crc kubenswrapper[4966]: I0127 16:01:07.612637 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" podStartSLOduration=1.7917347540000002 podStartE2EDuration="5.612615757s" podCreationTimestamp="2026-01-27 16:01:02 +0000 UTC" firstStartedPulling="2026-01-27 16:01:03.445126449 +0000 UTC m=+1129.747919927" lastFinishedPulling="2026-01-27 16:01:07.266007422 +0000 UTC m=+1133.568800930" observedRunningTime="2026-01-27 16:01:07.606162604 +0000 UTC m=+1133.908956142" watchObservedRunningTime="2026-01-27 16:01:07.612615757 +0000 UTC m=+1133.915409245" Jan 27 16:01:10 crc kubenswrapper[4966]: I0127 16:01:10.120254 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:01:10 crc kubenswrapper[4966]: I0127 16:01:10.120689 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:01:10 crc kubenswrapper[4966]: I0127 16:01:10.120763 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 16:01:10 crc kubenswrapper[4966]: I0127 16:01:10.121807 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ea82bc681b618d3fe42f05ec74306309a62aca633f08e8c15e2eb2ef6d9d0842"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:01:10 crc kubenswrapper[4966]: I0127 16:01:10.121926 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://ea82bc681b618d3fe42f05ec74306309a62aca633f08e8c15e2eb2ef6d9d0842" gracePeriod=600 Jan 27 16:01:10 crc kubenswrapper[4966]: I0127 16:01:10.607345 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="ea82bc681b618d3fe42f05ec74306309a62aca633f08e8c15e2eb2ef6d9d0842" exitCode=0 Jan 27 16:01:10 crc kubenswrapper[4966]: I0127 16:01:10.607555 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"ea82bc681b618d3fe42f05ec74306309a62aca633f08e8c15e2eb2ef6d9d0842"} Jan 27 16:01:10 crc kubenswrapper[4966]: I0127 16:01:10.607983 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"9cad2005eaacff8196cf8bd744c3709abce6b9766c6adb25e972b7211933a53f"} Jan 27 16:01:10 crc kubenswrapper[4966]: I0127 16:01:10.608067 4966 scope.go:117] "RemoveContainer" containerID="3d5bc75034aa8f67957594a0b69bd77e9edbe97abc49cd6f918ee618fb39479c" Jan 27 16:01:12 crc kubenswrapper[4966]: I0127 16:01:12.963851 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.758414 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h"] Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.760305 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.763178 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fr2wp" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.768997 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h"] Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.770307 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.773362 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jjp9c" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.776499 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h"] Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.835244 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h"] Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.862256 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h7jm\" (UniqueName: \"kubernetes.io/projected/093d4126-d96d-475a-9519-020f2f73a742-kube-api-access-5h7jm\") pod \"cinder-operator-controller-manager-7478f7dbf9-hqm7h\" (UID: \"093d4126-d96d-475a-9519-020f2f73a742\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.862440 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kj6x\" (UniqueName: \"kubernetes.io/projected/eb03df91-4797-41be-a7fb-7ca572014c88-kube-api-access-7kj6x\") pod \"barbican-operator-controller-manager-7f86f8796f-9s62h\" (UID: \"eb03df91-4797-41be-a7fb-7ca572014c88\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.874107 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr"] Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.882279 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.889065 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-dx9vt" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.909612 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp"] Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.911268 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.915551 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-fg4xg" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.957784 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr"] Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.963806 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kj6x\" (UniqueName: \"kubernetes.io/projected/eb03df91-4797-41be-a7fb-7ca572014c88-kube-api-access-7kj6x\") pod \"barbican-operator-controller-manager-7f86f8796f-9s62h\" (UID: \"eb03df91-4797-41be-a7fb-7ca572014c88\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.963880 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sklw5\" (UniqueName: \"kubernetes.io/projected/45594823-cdbb-4586-95d2-f2af9f6460b9-kube-api-access-sklw5\") pod \"designate-operator-controller-manager-b45d7bf98-76ffr\" (UID: \"45594823-cdbb-4586-95d2-f2af9f6460b9\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.963956 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z27ns\" (UniqueName: \"kubernetes.io/projected/9dcc8f2a-06d2-493e-b0ce-50120cef400e-kube-api-access-z27ns\") pod \"glance-operator-controller-manager-78fdd796fd-88lhp\" (UID: \"9dcc8f2a-06d2-493e-b0ce-50120cef400e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.963996 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h7jm\" (UniqueName: \"kubernetes.io/projected/093d4126-d96d-475a-9519-020f2f73a742-kube-api-access-5h7jm\") pod \"cinder-operator-controller-manager-7478f7dbf9-hqm7h\" (UID: \"093d4126-d96d-475a-9519-020f2f73a742\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.986595 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp"] Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.997957 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c"] Jan 27 16:01:32 crc kubenswrapper[4966]: I0127 16:01:32.999178 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.000920 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kj6x\" (UniqueName: \"kubernetes.io/projected/eb03df91-4797-41be-a7fb-7ca572014c88-kube-api-access-7kj6x\") pod \"barbican-operator-controller-manager-7f86f8796f-9s62h\" (UID: \"eb03df91-4797-41be-a7fb-7ca572014c88\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.001102 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h7jm\" (UniqueName: \"kubernetes.io/projected/093d4126-d96d-475a-9519-020f2f73a742-kube-api-access-5h7jm\") pod \"cinder-operator-controller-manager-7478f7dbf9-hqm7h\" (UID: \"093d4126-d96d-475a-9519-020f2f73a742\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.001870 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lmsb8" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.016479 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.041223 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.048341 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.050362 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.051623 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.055319 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-fpk87" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.055547 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.057042 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.065257 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ljgdv" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.065433 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.066394 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwrq\" (UniqueName: \"kubernetes.io/projected/624197a8-447a-4004-a1e0-679ce29dbe86-kube-api-access-mwwrq\") pod \"heat-operator-controller-manager-594c8c9d5d-j7j9c\" (UID: \"624197a8-447a-4004-a1e0-679ce29dbe86\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.066420 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.066451 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sklw5\" (UniqueName: \"kubernetes.io/projected/45594823-cdbb-4586-95d2-f2af9f6460b9-kube-api-access-sklw5\") pod \"designate-operator-controller-manager-b45d7bf98-76ffr\" (UID: \"45594823-cdbb-4586-95d2-f2af9f6460b9\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.066495 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z27ns\" (UniqueName: \"kubernetes.io/projected/9dcc8f2a-06d2-493e-b0ce-50120cef400e-kube-api-access-z27ns\") pod \"glance-operator-controller-manager-78fdd796fd-88lhp\" (UID: \"9dcc8f2a-06d2-493e-b0ce-50120cef400e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.066527 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntjc5\" (UniqueName: \"kubernetes.io/projected/ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff-kube-api-access-ntjc5\") pod \"horizon-operator-controller-manager-77d5c5b54f-77jlm\" (UID: \"ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.066558 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-dt4kn\" (UniqueName: \"kubernetes.io/projected/cfa058e6-1d6f-4dc2-8058-c00b201175b5-kube-api-access-dt4kn\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.071770 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.072856 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.076567 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-t6vjv" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.081127 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.082513 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.084879 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-65sm5" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.089792 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.095735 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z27ns\" (UniqueName: \"kubernetes.io/projected/9dcc8f2a-06d2-493e-b0ce-50120cef400e-kube-api-access-z27ns\") pod \"glance-operator-controller-manager-78fdd796fd-88lhp\" (UID: \"9dcc8f2a-06d2-493e-b0ce-50120cef400e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.098047 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.101192 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.107887 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.107964 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sklw5\" (UniqueName: \"kubernetes.io/projected/45594823-cdbb-4586-95d2-f2af9f6460b9-kube-api-access-sklw5\") pod \"designate-operator-controller-manager-b45d7bf98-76ffr\" (UID: \"45594823-cdbb-4586-95d2-f2af9f6460b9\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.109585 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.111129 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-snjgh" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.115434 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.115744 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.117264 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.118956 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-bdkpm" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.122519 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.136930 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.164290 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.165346 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.168487 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7w8\" (UniqueName: \"kubernetes.io/projected/3ae401e5-feea-47d3-9c86-1e33635a461a-kube-api-access-kb7w8\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw\" (UID: \"3ae401e5-feea-47d3-9c86-1e33635a461a\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.168524 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwwrq\" (UniqueName: \"kubernetes.io/projected/624197a8-447a-4004-a1e0-679ce29dbe86-kube-api-access-mwwrq\") pod \"heat-operator-controller-manager-594c8c9d5d-j7j9c\" (UID: \"624197a8-447a-4004-a1e0-679ce29dbe86\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.168550 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.168637 4966 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.168682 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert podName:cfa058e6-1d6f-4dc2-8058-c00b201175b5 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:33.668665856 +0000 UTC m=+1159.971459344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert") pod "infra-operator-controller-manager-694cf4f878-qwc6v" (UID: "cfa058e6-1d6f-4dc2-8058-c00b201175b5") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.168782 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntjc5\" (UniqueName: \"kubernetes.io/projected/ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff-kube-api-access-ntjc5\") pod \"horizon-operator-controller-manager-77d5c5b54f-77jlm\" (UID: \"ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.168856 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt4kn\" (UniqueName: \"kubernetes.io/projected/cfa058e6-1d6f-4dc2-8058-c00b201175b5-kube-api-access-dt4kn\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.168916 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwhw\" (UniqueName: \"kubernetes.io/projected/dd40e2cd-59aa-442b-b27a-209632cba6e4-kube-api-access-xrwhw\") pod \"keystone-operator-controller-manager-b8b6d4659-j6s8k\" (UID: \"dd40e2cd-59aa-442b-b27a-209632cba6e4\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.168963 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqfmj\" (UniqueName: \"kubernetes.io/projected/e2cfe3d1-d500-418e-bc6b-4da3482999c3-kube-api-access-xqfmj\") pod \"ironic-operator-controller-manager-598f7747c9-rzghf\" (UID: \"e2cfe3d1-d500-418e-bc6b-4da3482999c3\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.169048 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p29l\" (UniqueName: \"kubernetes.io/projected/08ac68d1-220d-4098-9eed-6d0e3b752e5d-kube-api-access-4p29l\") pod \"manila-operator-controller-manager-78c6999f6f-6d5tv\" (UID: \"08ac68d1-220d-4098-9eed-6d0e3b752e5d\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.172817 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9rs4n" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.185009 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntjc5\" (UniqueName: \"kubernetes.io/projected/ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff-kube-api-access-ntjc5\") pod \"horizon-operator-controller-manager-77d5c5b54f-77jlm\" (UID: \"ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.185081 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 
16:01:33.193392 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.193661 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt4kn\" (UniqueName: \"kubernetes.io/projected/cfa058e6-1d6f-4dc2-8058-c00b201175b5-kube-api-access-dt4kn\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.194613 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.194713 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.195862 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cz6ct" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.197252 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwwrq\" (UniqueName: \"kubernetes.io/projected/624197a8-447a-4004-a1e0-679ce29dbe86-kube-api-access-mwwrq\") pod \"heat-operator-controller-manager-594c8c9d5d-j7j9c\" (UID: \"624197a8-447a-4004-a1e0-679ce29dbe86\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.200993 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.204269 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.206202 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dwr75" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.207186 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.213871 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.218606 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7h6s2" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.218995 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.221278 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.222594 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.227219 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.227449 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qsljm" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.229667 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.237928 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.241728 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.241785 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.243499 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-b8scn" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.245755 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.252317 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.264692 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.273620 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cvjc\" (UniqueName: \"kubernetes.io/projected/f096bdf7-f589-4344-b71f-ab9db2eded5f-kube-api-access-8cvjc\") pod \"ovn-operator-controller-manager-6f75f45d54-rv5wp\" (UID: \"f096bdf7-f589-4344-b71f-ab9db2eded5f\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.273944 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s44p\" (UniqueName: \"kubernetes.io/projected/20e54080-e732-4925-b0c2-35669744821d-kube-api-access-9s44p\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.274049 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrwhw\" (UniqueName: \"kubernetes.io/projected/dd40e2cd-59aa-442b-b27a-209632cba6e4-kube-api-access-xrwhw\") pod \"keystone-operator-controller-manager-b8b6d4659-j6s8k\" (UID: \"dd40e2cd-59aa-442b-b27a-209632cba6e4\") 
" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.274173 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqfmj\" (UniqueName: \"kubernetes.io/projected/e2cfe3d1-d500-418e-bc6b-4da3482999c3-kube-api-access-xqfmj\") pod \"ironic-operator-controller-manager-598f7747c9-rzghf\" (UID: \"e2cfe3d1-d500-418e-bc6b-4da3482999c3\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.274300 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p29l\" (UniqueName: \"kubernetes.io/projected/08ac68d1-220d-4098-9eed-6d0e3b752e5d-kube-api-access-4p29l\") pod \"manila-operator-controller-manager-78c6999f6f-6d5tv\" (UID: \"08ac68d1-220d-4098-9eed-6d0e3b752e5d\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.274418 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xtdg\" (UniqueName: \"kubernetes.io/projected/871381eb-c218-433c-a004-fea884f4ced0-kube-api-access-6xtdg\") pod \"neutron-operator-controller-manager-78d58447c5-h6qtl\" (UID: \"871381eb-c218-433c-a004-fea884f4ced0\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.274535 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fdhz\" (UniqueName: \"kubernetes.io/projected/64b84834-e9db-4f50-a7c7-6d24302652d3-kube-api-access-7fdhz\") pod \"nova-operator-controller-manager-7bdb645866-sttbs\" (UID: \"64b84834-e9db-4f50-a7c7-6d24302652d3\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.274639 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb7w8\" (UniqueName: \"kubernetes.io/projected/3ae401e5-feea-47d3-9c86-1e33635a461a-kube-api-access-kb7w8\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw\" (UID: \"3ae401e5-feea-47d3-9c86-1e33635a461a\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.274743 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxq2\" (UniqueName: \"kubernetes.io/projected/dd6f6600-3072-42e6-a8ca-5e72c960425a-kube-api-access-rpxq2\") pod \"placement-operator-controller-manager-79d5ccc684-kxj69\" (UID: \"dd6f6600-3072-42e6-a8ca-5e72c960425a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.274849 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.274942 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzzzd\" 
(UniqueName: \"kubernetes.io/projected/6006cb9c-d22f-47b1-b8b6-cb999ecab7df-kube-api-access-mzzzd\") pod \"octavia-operator-controller-manager-5f4cd88d46-2n8dp\" (UID: \"6006cb9c-d22f-47b1-b8b6-cb999ecab7df\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.306927 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqfmj\" (UniqueName: \"kubernetes.io/projected/e2cfe3d1-d500-418e-bc6b-4da3482999c3-kube-api-access-xqfmj\") pod \"ironic-operator-controller-manager-598f7747c9-rzghf\" (UID: \"e2cfe3d1-d500-418e-bc6b-4da3482999c3\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.307554 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb7w8\" (UniqueName: \"kubernetes.io/projected/3ae401e5-feea-47d3-9c86-1e33635a461a-kube-api-access-kb7w8\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw\" (UID: \"3ae401e5-feea-47d3-9c86-1e33635a461a\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.311008 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrwhw\" (UniqueName: \"kubernetes.io/projected/dd40e2cd-59aa-442b-b27a-209632cba6e4-kube-api-access-xrwhw\") pod \"keystone-operator-controller-manager-b8b6d4659-j6s8k\" (UID: \"dd40e2cd-59aa-442b-b27a-209632cba6e4\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.311567 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p29l\" (UniqueName: \"kubernetes.io/projected/08ac68d1-220d-4098-9eed-6d0e3b752e5d-kube-api-access-4p29l\") pod \"manila-operator-controller-manager-78c6999f6f-6d5tv\" (UID: \"08ac68d1-220d-4098-9eed-6d0e3b752e5d\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.317002 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.324432 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.326147 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-g6cjh" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.327535 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.382087 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.382315 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76z7m\" (UniqueName: \"kubernetes.io/projected/8645d6d2-f7cd-4578-9a1a-8b07beeae08c-kube-api-access-76z7m\") pod \"swift-operator-controller-manager-547cbdb99f-zdjjc\" (UID: \"8645d6d2-f7cd-4578-9a1a-8b07beeae08c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.383326 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpxq2\" (UniqueName: \"kubernetes.io/projected/dd6f6600-3072-42e6-a8ca-5e72c960425a-kube-api-access-rpxq2\") pod \"placement-operator-controller-manager-79d5ccc684-kxj69\" (UID: \"dd6f6600-3072-42e6-a8ca-5e72c960425a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.384074 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.386475 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.386519 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzzzd\" (UniqueName: \"kubernetes.io/projected/6006cb9c-d22f-47b1-b8b6-cb999ecab7df-kube-api-access-mzzzd\") pod \"octavia-operator-controller-manager-5f4cd88d46-2n8dp\" (UID: \"6006cb9c-d22f-47b1-b8b6-cb999ecab7df\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.386615 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cvjc\" (UniqueName: \"kubernetes.io/projected/f096bdf7-f589-4344-b71f-ab9db2eded5f-kube-api-access-8cvjc\") pod \"ovn-operator-controller-manager-6f75f45d54-rv5wp\" (UID: \"f096bdf7-f589-4344-b71f-ab9db2eded5f\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.386710 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s44p\" (UniqueName: \"kubernetes.io/projected/20e54080-e732-4925-b0c2-35669744821d-kube-api-access-9s44p\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.386878 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xtdg\" (UniqueName: \"kubernetes.io/projected/871381eb-c218-433c-a004-fea884f4ced0-kube-api-access-6xtdg\") pod \"neutron-operator-controller-manager-78d58447c5-h6qtl\" (UID: \"871381eb-c218-433c-a004-fea884f4ced0\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.386926 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fdhz\" (UniqueName: \"kubernetes.io/projected/64b84834-e9db-4f50-a7c7-6d24302652d3-kube-api-access-7fdhz\") pod \"nova-operator-controller-manager-7bdb645866-sttbs\" (UID: \"64b84834-e9db-4f50-a7c7-6d24302652d3\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.386858 4966 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.387160 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert podName:20e54080-e732-4925-b0c2-35669744821d nodeName:}" failed. No retries permitted until 2026-01-27 16:01:33.88714054 +0000 UTC m=+1160.189934028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" (UID: "20e54080-e732-4925-b0c2-35669744821d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.388434 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.393427 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6h8hx" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.413465 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpxq2\" (UniqueName: \"kubernetes.io/projected/dd6f6600-3072-42e6-a8ca-5e72c960425a-kube-api-access-rpxq2\") pod \"placement-operator-controller-manager-79d5ccc684-kxj69\" (UID: \"dd6f6600-3072-42e6-a8ca-5e72c960425a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.417666 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fdhz\" (UniqueName: \"kubernetes.io/projected/64b84834-e9db-4f50-a7c7-6d24302652d3-kube-api-access-7fdhz\") pod \"nova-operator-controller-manager-7bdb645866-sttbs\" (UID: \"64b84834-e9db-4f50-a7c7-6d24302652d3\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.424391 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.426657 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xtdg\" (UniqueName: \"kubernetes.io/projected/871381eb-c218-433c-a004-fea884f4ced0-kube-api-access-6xtdg\") pod \"neutron-operator-controller-manager-78d58447c5-h6qtl\" (UID: \"871381eb-c218-433c-a004-fea884f4ced0\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.429025 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s44p\" (UniqueName: \"kubernetes.io/projected/20e54080-e732-4925-b0c2-35669744821d-kube-api-access-9s44p\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.429042 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzzzd\" (UniqueName: \"kubernetes.io/projected/6006cb9c-d22f-47b1-b8b6-cb999ecab7df-kube-api-access-mzzzd\") pod \"octavia-operator-controller-manager-5f4cd88d46-2n8dp\" (UID: \"6006cb9c-d22f-47b1-b8b6-cb999ecab7df\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.437775 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.443580 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.447346 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cvjc\" (UniqueName: \"kubernetes.io/projected/f096bdf7-f589-4344-b71f-ab9db2eded5f-kube-api-access-8cvjc\") pod \"ovn-operator-controller-manager-6f75f45d54-rv5wp\" (UID: \"f096bdf7-f589-4344-b71f-ab9db2eded5f\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.480727 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.489697 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p28xb\" (UniqueName: \"kubernetes.io/projected/734cfb67-80ec-42a1-8d52-298ae82e1a6b-kube-api-access-p28xb\") pod \"telemetry-operator-controller-manager-8bb444544-qmbfx\" (UID: \"734cfb67-80ec-42a1-8d52-298ae82e1a6b\") " pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.489838 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76z7m\" (UniqueName: \"kubernetes.io/projected/8645d6d2-f7cd-4578-9a1a-8b07beeae08c-kube-api-access-76z7m\") pod \"swift-operator-controller-manager-547cbdb99f-zdjjc\" (UID: \"8645d6d2-f7cd-4578-9a1a-8b07beeae08c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.501004 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.521819 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76z7m\" (UniqueName: \"kubernetes.io/projected/8645d6d2-f7cd-4578-9a1a-8b07beeae08c-kube-api-access-76z7m\") pod \"swift-operator-controller-manager-547cbdb99f-zdjjc\" (UID: \"8645d6d2-f7cd-4578-9a1a-8b07beeae08c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.524738 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.531145 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.547881 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-crddt"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.549137 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.552931 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jhqgt" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.562626 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.573535 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.577099 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-crddt"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.581864 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.592551 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p28xb\" (UniqueName: \"kubernetes.io/projected/734cfb67-80ec-42a1-8d52-298ae82e1a6b-kube-api-access-p28xb\") pod \"telemetry-operator-controller-manager-8bb444544-qmbfx\" (UID: \"734cfb67-80ec-42a1-8d52-298ae82e1a6b\") " pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.592779 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmz6\" (UniqueName: \"kubernetes.io/projected/3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7-kube-api-access-wzmz6\") pod \"test-operator-controller-manager-69797bbcbd-crddt\" (UID: \"3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.612675 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p28xb\" (UniqueName: \"kubernetes.io/projected/734cfb67-80ec-42a1-8d52-298ae82e1a6b-kube-api-access-p28xb\") pod \"telemetry-operator-controller-manager-8bb444544-qmbfx\" (UID: \"734cfb67-80ec-42a1-8d52-298ae82e1a6b\") " pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.651270 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-l5xd2"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.653197 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.657466 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f8vfd" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.662180 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-l5xd2"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.676696 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.678780 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.680797 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.680830 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.681460 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nqtq6" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.694046 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.694593 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.694700 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.694765 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq6hw\" (UniqueName: \"kubernetes.io/projected/434d2d44-cb00-40d2-90b5-64dd65faadc8-kube-api-access-bq6hw\") pod \"watcher-operator-controller-manager-564965969-l5xd2\" (UID: \"434d2d44-cb00-40d2-90b5-64dd65faadc8\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.694797 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmz6\" (UniqueName: \"kubernetes.io/projected/3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7-kube-api-access-wzmz6\") pod \"test-operator-controller-manager-69797bbcbd-crddt\" (UID: \"3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.694959 4966 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.695023 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert podName:cfa058e6-1d6f-4dc2-8058-c00b201175b5 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:34.69500387 +0000 UTC m=+1160.997797348 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert") pod "infra-operator-controller-manager-694cf4f878-qwc6v" (UID: "cfa058e6-1d6f-4dc2-8058-c00b201175b5") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.695332 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.695674 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv825\" (UniqueName: \"kubernetes.io/projected/e49e9fb2-a5f0-4106-b239-93d488e4f515-kube-api-access-bv825\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.712352 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.720606 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmz6\" (UniqueName: \"kubernetes.io/projected/3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7-kube-api-access-wzmz6\") pod \"test-operator-controller-manager-69797bbcbd-crddt\" (UID: \"3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.725523 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.733386 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.735877 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.737557 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tqkv6" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.740420 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.754008 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.755041 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.798595 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqt2\" (UniqueName: \"kubernetes.io/projected/0af070d2-e4fd-488e-abd5-c8ae5915d089-kube-api-access-rqqt2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rvh45\" (UID: \"0af070d2-e4fd-488e-abd5-c8ae5915d089\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.798661 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.798791 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq6hw\" (UniqueName: \"kubernetes.io/projected/434d2d44-cb00-40d2-90b5-64dd65faadc8-kube-api-access-bq6hw\") pod \"watcher-operator-controller-manager-564965969-l5xd2\" (UID: \"434d2d44-cb00-40d2-90b5-64dd65faadc8\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.798839 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.798889 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv825\" (UniqueName: \"kubernetes.io/projected/e49e9fb2-a5f0-4106-b239-93d488e4f515-kube-api-access-bv825\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.799383 4966 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.799425 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:34.299407406 +0000 UTC m=+1160.602200894 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.800682 4966 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.800715 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:34.300704537 +0000 UTC m=+1160.603498015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "metrics-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.820207 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv825\" (UniqueName: \"kubernetes.io/projected/e49e9fb2-a5f0-4106-b239-93d488e4f515-kube-api-access-bv825\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.823877 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.827329 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq6hw\" (UniqueName: \"kubernetes.io/projected/434d2d44-cb00-40d2-90b5-64dd65faadc8-kube-api-access-bq6hw\") pod \"watcher-operator-controller-manager-564965969-l5xd2\" (UID: \"434d2d44-cb00-40d2-90b5-64dd65faadc8\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.847854 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h"] Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.900479 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqqt2\" (UniqueName: \"kubernetes.io/projected/0af070d2-e4fd-488e-abd5-c8ae5915d089-kube-api-access-rqqt2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rvh45\" (UID: \"0af070d2-e4fd-488e-abd5-c8ae5915d089\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.901182 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.901321 4966 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: E0127 16:01:33.901378 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert podName:20e54080-e732-4925-b0c2-35669744821d nodeName:}" failed. No retries permitted until 2026-01-27 16:01:34.901363985 +0000 UTC m=+1161.204157473 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" (UID: "20e54080-e732-4925-b0c2-35669744821d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:33 crc kubenswrapper[4966]: I0127 16:01:33.918352 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqqt2\" (UniqueName: \"kubernetes.io/projected/0af070d2-e4fd-488e-abd5-c8ae5915d089-kube-api-access-rqqt2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rvh45\" (UID: \"0af070d2-e4fd-488e-abd5-c8ae5915d089\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.037624 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr"] Jan 27 16:01:34 crc kubenswrapper[4966]: W0127 16:01:34.042053 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod624197a8_447a_4004_a1e0_679ce29dbe86.slice/crio-f719645acaeb8edefa47a33678f11dc676be7fbcbb70beda9987fee47c915e16 WatchSource:0}: Error finding container f719645acaeb8edefa47a33678f11dc676be7fbcbb70beda9987fee47c915e16: Status 404 returned error can't find the container with id f719645acaeb8edefa47a33678f11dc676be7fbcbb70beda9987fee47c915e16 Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.057373 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c"] Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.077517 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.146090 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.200657 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp"] Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.312123 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.312278 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:34 crc kubenswrapper[4966]: E0127 16:01:34.312330 4966 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:01:34 crc kubenswrapper[4966]: E0127 16:01:34.312429 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:35.312406152 +0000 UTC m=+1161.615199640 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "metrics-server-cert" not found Jan 27 16:01:34 crc kubenswrapper[4966]: E0127 16:01:34.312446 4966 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:01:34 crc kubenswrapper[4966]: E0127 16:01:34.312516 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:35.312491924 +0000 UTC m=+1161.615285412 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "webhook-server-cert" not found Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.419493 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv"] Jan 27 16:01:34 crc kubenswrapper[4966]: W0127 16:01:34.448981 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded16ab57_79cc_42b3_9ae5_663ba4c7b8ff.slice/crio-7c0e9e418448dd3d0c83a4773edf31d4bfba7d32679061c75479875be3c03ed5 WatchSource:0}: Error finding container 7c0e9e418448dd3d0c83a4773edf31d4bfba7d32679061c75479875be3c03ed5: Status 404 returned error can't find the container with id 7c0e9e418448dd3d0c83a4773edf31d4bfba7d32679061c75479875be3c03ed5 Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.450981 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw"] Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.464541 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm"] Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.643513 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp"] Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.655648 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k"] Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.720873 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:34 crc kubenswrapper[4966]: E0127 16:01:34.721287 4966 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:34 crc kubenswrapper[4966]: E0127 16:01:34.723537 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert podName:cfa058e6-1d6f-4dc2-8058-c00b201175b5 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:36.7234884 +0000 UTC m=+1163.026281878 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert") pod "infra-operator-controller-manager-694cf4f878-qwc6v" (UID: "cfa058e6-1d6f-4dc2-8058-c00b201175b5") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.825045 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" event={"ID":"6006cb9c-d22f-47b1-b8b6-cb999ecab7df","Type":"ContainerStarted","Data":"af292e1d5a56ed2cd736ed3ba388ad108e5c4a951d136e88b363ee1b382ec804"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.826884 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" event={"ID":"eb03df91-4797-41be-a7fb-7ca572014c88","Type":"ContainerStarted","Data":"d8c65cf0327ed2a4a5f0552e82fbc07ff92019f04be40468eb4a210dbc385dfc"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.827833 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" event={"ID":"624197a8-447a-4004-a1e0-679ce29dbe86","Type":"ContainerStarted","Data":"f719645acaeb8edefa47a33678f11dc676be7fbcbb70beda9987fee47c915e16"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.828812 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" event={"ID":"45594823-cdbb-4586-95d2-f2af9f6460b9","Type":"ContainerStarted","Data":"a6e9a8d138efcf5cb92e5d35e9942bf701fadaa989f72c250b25f6715ac9be41"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.830044 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" event={"ID":"093d4126-d96d-475a-9519-020f2f73a742","Type":"ContainerStarted","Data":"c4dd6ad0a44b915e856e383d7c99740d1b7b264b4a9927697888fd3b0e50af8b"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.831219 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" event={"ID":"3ae401e5-feea-47d3-9c86-1e33635a461a","Type":"ContainerStarted","Data":"0087ee7e9a379aa71d9967ebdba0eedce0af0bf18ed2413e1f9e1c2e260e3c5b"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.832104 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" event={"ID":"ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff","Type":"ContainerStarted","Data":"7c0e9e418448dd3d0c83a4773edf31d4bfba7d32679061c75479875be3c03ed5"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.834090 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" event={"ID":"08ac68d1-220d-4098-9eed-6d0e3b752e5d","Type":"ContainerStarted","Data":"00684b547580dadd95a6d9f662512a601325370384ba2933b5cbeee9fa0a39e1"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.835226 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" event={"ID":"9dcc8f2a-06d2-493e-b0ce-50120cef400e","Type":"ContainerStarted","Data":"cf7d47bd5ff7ae3ed5af223d66caea1eb7a79dd6ed44ca02cd5dee1648eb5a33"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.836403 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" event={"ID":"dd40e2cd-59aa-442b-b27a-209632cba6e4","Type":"ContainerStarted","Data":"cea91b3479e720407882a5d2160a7210e1a3d1f911d4c1ecb012cd227c44ec88"} Jan 27 16:01:34 crc kubenswrapper[4966]: I0127 16:01:34.924328 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:34 crc kubenswrapper[4966]: E0127 16:01:34.924517 4966 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:34 crc kubenswrapper[4966]: E0127 16:01:34.924611 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert podName:20e54080-e732-4925-b0c2-35669744821d nodeName:}" failed. No retries permitted until 2026-01-27 16:01:36.924589279 +0000 UTC m=+1163.227382777 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" (UID: "20e54080-e732-4925-b0c2-35669744821d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.301075 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69"] Jan 27 16:01:35 crc kubenswrapper[4966]: W0127 16:01:35.329912 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf096bdf7_f589_4344_b71f_ab9db2eded5f.slice/crio-f75bc9d1c1440b5854332246e650279aaf033d4ba4a72408ab4035875aca1678 WatchSource:0}: Error finding container f75bc9d1c1440b5854332246e650279aaf033d4ba4a72408ab4035875aca1678: Status 404 returned error can't find the container with id f75bc9d1c1440b5854332246e650279aaf033d4ba4a72408ab4035875aca1678 Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.330864 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.331018 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.331257 4966 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.331320 4966 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:37.33130079 +0000 UTC m=+1163.634094278 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "webhook-server-cert" not found Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.331695 4966 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.331739 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:37.331726074 +0000 UTC m=+1163.634519562 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "metrics-server-cert" not found Jan 27 16:01:35 crc kubenswrapper[4966]: W0127 16:01:35.346535 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd6f6600_3072_42e6_a8ca_5e72c960425a.slice/crio-7ade99d2acc5963ce950181588c72ec0ccbdb00858e5a426a5a91331562fd653 WatchSource:0}: Error finding container 7ade99d2acc5963ce950181588c72ec0ccbdb00858e5a426a5a91331562fd653: Status 404 returned error can't find the container with id 7ade99d2acc5963ce950181588c72ec0ccbdb00858e5a426a5a91331562fd653 Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.363009 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl"] Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.376020 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp"] Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.385974 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf"] Jan 27 16:01:35 crc kubenswrapper[4966]: W0127 16:01:35.398326 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod734cfb67_80ec_42a1_8d52_298ae82e1a6b.slice/crio-cfd2831f26c10df51119f0679a8aaf2303128e5b86a62b2a0fcad588ce1fb583 WatchSource:0}: Error finding container cfd2831f26c10df51119f0679a8aaf2303128e5b86a62b2a0fcad588ce1fb583: Status 404 returned error can't find the container with id cfd2831f26c10df51119f0679a8aaf2303128e5b86a62b2a0fcad588ce1fb583 Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.400695 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx"] Jan 27 16:01:35 crc kubenswrapper[4966]: W0127 16:01:35.405504 4966 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod434d2d44_cb00_40d2_90b5_64dd65faadc8.slice/crio-5b00fa0f6fd6e8b34c6d6a9b36d640dbd12db2f72ea466b701fa6d948b284e68 WatchSource:0}: Error finding container 5b00fa0f6fd6e8b34c6d6a9b36d640dbd12db2f72ea466b701fa6d948b284e68: Status 404 returned error can't find the container with id 5b00fa0f6fd6e8b34c6d6a9b36d640dbd12db2f72ea466b701fa6d948b284e68 Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.417494 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.164:5001/openstack-k8s-operators/telemetry-operator:1910d239c45618b2e0fb12ecdc7daaef325b114b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p28xb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-8bb444544-qmbfx_openstack-operators(734cfb67-80ec-42a1-8d52-298ae82e1a6b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.421245 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" podUID="734cfb67-80ec-42a1-8d52-298ae82e1a6b" Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.425405 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc"] Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.426601 4966 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzmz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-crddt_openstack-operators(3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.428044 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podUID="3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7" Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.429393 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bq6hw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-l5xd2_openstack-operators(434d2d44-cb00-40d2-90b5-64dd65faadc8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.430747 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" podUID="434d2d44-cb00-40d2-90b5-64dd65faadc8" Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.439032 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs"] Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.446234 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-crddt"] Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.465156 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45"] Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.476303 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-l5xd2"] Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.846875 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" 
event={"ID":"3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7","Type":"ContainerStarted","Data":"37b4e91a3675b1df71e5a17f5a01308d8504cd980761a01504444e2c5e6a2d6f"} Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.848992 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" event={"ID":"8645d6d2-f7cd-4578-9a1a-8b07beeae08c","Type":"ContainerStarted","Data":"a4de67e9591602486cd325e1deb3a7baacd635c82f4fe42332a24c906edc84e7"} Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.850837 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" event={"ID":"dd6f6600-3072-42e6-a8ca-5e72c960425a","Type":"ContainerStarted","Data":"7ade99d2acc5963ce950181588c72ec0ccbdb00858e5a426a5a91331562fd653"} Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.851119 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podUID="3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7" Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.854074 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" event={"ID":"0af070d2-e4fd-488e-abd5-c8ae5915d089","Type":"ContainerStarted","Data":"16996a6ec4425f2b6e517a657fde9049959f6710a20898276f4059eb7c7c49ad"} Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.855715 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" event={"ID":"f096bdf7-f589-4344-b71f-ab9db2eded5f","Type":"ContainerStarted","Data":"f75bc9d1c1440b5854332246e650279aaf033d4ba4a72408ab4035875aca1678"} Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.857480 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" event={"ID":"871381eb-c218-433c-a004-fea884f4ced0","Type":"ContainerStarted","Data":"f813cad0903b68b197e6479530ae9752619189dffbbb016124525e1e4ed07068"} Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.858431 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" event={"ID":"734cfb67-80ec-42a1-8d52-298ae82e1a6b","Type":"ContainerStarted","Data":"cfd2831f26c10df51119f0679a8aaf2303128e5b86a62b2a0fcad588ce1fb583"} Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.860792 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" event={"ID":"434d2d44-cb00-40d2-90b5-64dd65faadc8","Type":"ContainerStarted","Data":"5b00fa0f6fd6e8b34c6d6a9b36d640dbd12db2f72ea466b701fa6d948b284e68"} Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.862557 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" podUID="434d2d44-cb00-40d2-90b5-64dd65faadc8" Jan 27 16:01:35 crc 
Jan 27 16:01:35 crc kubenswrapper[4966]: E0127 16:01:35.865567 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/openstack-k8s-operators/telemetry-operator:1910d239c45618b2e0fb12ecdc7daaef325b114b\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" podUID="734cfb67-80ec-42a1-8d52-298ae82e1a6b"
Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.866170 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" event={"ID":"e2cfe3d1-d500-418e-bc6b-4da3482999c3","Type":"ContainerStarted","Data":"7ea14ec3e1b4354746c5e9b655c443582c5a9c82ae59249ef7e85d599c316d6c"}
Jan 27 16:01:35 crc kubenswrapper[4966]: I0127 16:01:35.867573 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" event={"ID":"64b84834-e9db-4f50-a7c7-6d24302652d3","Type":"ContainerStarted","Data":"4ba2ee3b1e3af1a0a6dc28d7c9c39f484f77a97bc3c0bd055963fa9f77264c4d"}
Jan 27 16:01:36 crc kubenswrapper[4966]: I0127 16:01:36.787937 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v"
Jan 27 16:01:36 crc kubenswrapper[4966]: E0127 16:01:36.788104 4966 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 27 16:01:36 crc kubenswrapper[4966]: E0127 16:01:36.788185 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert podName:cfa058e6-1d6f-4dc2-8058-c00b201175b5 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:40.788165951 +0000 UTC m=+1167.090959429 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert") pod "infra-operator-controller-manager-694cf4f878-qwc6v" (UID: "cfa058e6-1d6f-4dc2-8058-c00b201175b5") : secret "infra-operator-webhook-server-cert" not found
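
Note the durationBeforeRetry on the repeated mount failures: 500ms and 1s earlier in the log, 2s and 4s here, then 8s and 16s further down. Each failed SetUp reschedules with double the previous delay. A sketch of that schedule using apimachinery's wait.Backoff; the 2m2s ceiling is an assumption, since this excerpt only shows delays up to 16s:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	// Doubling schedule matching the durationBeforeRetry values in this log:
    	// 500ms, 1s, 2s, 4s, 8s, 16s, ...
    	b := wait.Backoff{
    		Duration: 500 * time.Millisecond, // first retry delay seen above
    		Factor:   2,                      // each failure doubles the delay
    		Steps:    9,
    		Cap:      2*time.Minute + 2*time.Second, // assumed ceiling, not shown in this excerpt
    	}
    	for retry := 1; retry <= 9; retry++ {
    		fmt.Printf("retry %d: durationBeforeRetry %v\n", retry, b.Step())
    	}
    }
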
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert") pod "infra-operator-controller-manager-694cf4f878-qwc6v" (UID: "cfa058e6-1d6f-4dc2-8058-c00b201175b5") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:36 crc kubenswrapper[4966]: E0127 16:01:36.883950 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/openstack-k8s-operators/telemetry-operator:1910d239c45618b2e0fb12ecdc7daaef325b114b\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" podUID="734cfb67-80ec-42a1-8d52-298ae82e1a6b" Jan 27 16:01:36 crc kubenswrapper[4966]: E0127 16:01:36.884003 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" podUID="434d2d44-cb00-40d2-90b5-64dd65faadc8" Jan 27 16:01:36 crc kubenswrapper[4966]: E0127 16:01:36.886066 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podUID="3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7" Jan 27 16:01:36 crc kubenswrapper[4966]: I0127 16:01:36.993500 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:36 crc kubenswrapper[4966]: E0127 16:01:36.994427 4966 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:36 crc kubenswrapper[4966]: E0127 16:01:36.994477 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert podName:20e54080-e732-4925-b0c2-35669744821d nodeName:}" failed. No retries permitted until 2026-01-27 16:01:40.994464063 +0000 UTC m=+1167.297257551 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" (UID: "20e54080-e732-4925-b0c2-35669744821d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:37 crc kubenswrapper[4966]: I0127 16:01:37.400510 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:37 crc kubenswrapper[4966]: I0127 16:01:37.400967 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:37 crc kubenswrapper[4966]: E0127 16:01:37.401136 4966 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:01:37 crc kubenswrapper[4966]: E0127 16:01:37.401192 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:41.401172794 +0000 UTC m=+1167.703966282 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "webhook-server-cert" not found Jan 27 16:01:37 crc kubenswrapper[4966]: E0127 16:01:37.401568 4966 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:01:37 crc kubenswrapper[4966]: E0127 16:01:37.401599 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:41.401590076 +0000 UTC m=+1167.704383564 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "metrics-server-cert" not found Jan 27 16:01:40 crc kubenswrapper[4966]: I0127 16:01:40.870135 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:40 crc kubenswrapper[4966]: E0127 16:01:40.870295 4966 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:40 crc kubenswrapper[4966]: E0127 16:01:40.870622 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert podName:cfa058e6-1d6f-4dc2-8058-c00b201175b5 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:48.870608129 +0000 UTC m=+1175.173401617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert") pod "infra-operator-controller-manager-694cf4f878-qwc6v" (UID: "cfa058e6-1d6f-4dc2-8058-c00b201175b5") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:41 crc kubenswrapper[4966]: I0127 16:01:41.074382 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:41 crc kubenswrapper[4966]: E0127 16:01:41.074600 4966 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:41 crc kubenswrapper[4966]: E0127 16:01:41.074691 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert podName:20e54080-e732-4925-b0c2-35669744821d nodeName:}" failed. No retries permitted until 2026-01-27 16:01:49.074672532 +0000 UTC m=+1175.377466020 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" (UID: "20e54080-e732-4925-b0c2-35669744821d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:41 crc kubenswrapper[4966]: I0127 16:01:41.481436 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:41 crc kubenswrapper[4966]: I0127 16:01:41.481568 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:41 crc kubenswrapper[4966]: E0127 16:01:41.481627 4966 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:01:41 crc kubenswrapper[4966]: E0127 16:01:41.481701 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:49.481682622 +0000 UTC m=+1175.784476120 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "metrics-server-cert" not found Jan 27 16:01:41 crc kubenswrapper[4966]: E0127 16:01:41.481791 4966 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:01:41 crc kubenswrapper[4966]: E0127 16:01:41.481918 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:01:49.481874579 +0000 UTC m=+1175.784668077 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "webhook-server-cert" not found Jan 27 16:01:48 crc kubenswrapper[4966]: I0127 16:01:48.941626 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:01:48 crc kubenswrapper[4966]: E0127 16:01:48.941866 4966 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:48 crc kubenswrapper[4966]: E0127 16:01:48.943568 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert podName:cfa058e6-1d6f-4dc2-8058-c00b201175b5 nodeName:}" failed. No retries permitted until 2026-01-27 16:02:04.943541813 +0000 UTC m=+1191.246335291 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert") pod "infra-operator-controller-manager-694cf4f878-qwc6v" (UID: "cfa058e6-1d6f-4dc2-8058-c00b201175b5") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:01:49 crc kubenswrapper[4966]: I0127 16:01:49.147029 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:01:49 crc kubenswrapper[4966]: E0127 16:01:49.147200 4966 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:49 crc kubenswrapper[4966]: E0127 16:01:49.147258 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert podName:20e54080-e732-4925-b0c2-35669744821d nodeName:}" failed. No retries permitted until 2026-01-27 16:02:05.147241754 +0000 UTC m=+1191.450035242 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" (UID: "20e54080-e732-4925-b0c2-35669744821d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:01:49 crc kubenswrapper[4966]: I0127 16:01:49.553766 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:49 crc kubenswrapper[4966]: E0127 16:01:49.554061 4966 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:01:49 crc kubenswrapper[4966]: I0127 16:01:49.554169 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:01:49 crc kubenswrapper[4966]: E0127 16:01:49.554211 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:02:05.554171422 +0000 UTC m=+1191.856965060 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "webhook-server-cert" not found Jan 27 16:01:49 crc kubenswrapper[4966]: E0127 16:01:49.554366 4966 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:01:49 crc kubenswrapper[4966]: E0127 16:01:49.554465 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs podName:e49e9fb2-a5f0-4106-b239-93d488e4f515 nodeName:}" failed. No retries permitted until 2026-01-27 16:02:05.55443996 +0000 UTC m=+1191.857233588 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs") pod "openstack-operator-controller-manager-69869d7dcf-h42mh" (UID: "e49e9fb2-a5f0-4106-b239-93d488e4f515") : secret "metrics-server-cert" not found Jan 27 16:01:55 crc kubenswrapper[4966]: E0127 16:01:55.164554 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 27 16:01:55 crc kubenswrapper[4966]: E0127 16:01:55.165225 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sklw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-76ffr_openstack-operators(45594823-cdbb-4586-95d2-f2af9f6460b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:01:55 crc kubenswrapper[4966]: E0127 16:01:55.166355 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podUID="45594823-cdbb-4586-95d2-f2af9f6460b9" Jan 27 16:01:56 crc kubenswrapper[4966]: E0127 16:01:56.057361 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podUID="45594823-cdbb-4586-95d2-f2af9f6460b9" Jan 27 16:01:56 crc kubenswrapper[4966]: E0127 16:01:56.324260 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 27 16:01:56 crc kubenswrapper[4966]: E0127 16:01:56.329204 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7kj6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-9s62h_openstack-operators(eb03df91-4797-41be-a7fb-7ca572014c88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:01:56 crc 
kubenswrapper[4966]: E0127 16:01:56.331422 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" podUID="eb03df91-4797-41be-a7fb-7ca572014c88" Jan 27 16:01:57 crc kubenswrapper[4966]: E0127 16:01:57.065640 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" podUID="eb03df91-4797-41be-a7fb-7ca572014c88" Jan 27 16:01:58 crc kubenswrapper[4966]: E0127 16:01:58.564497 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 27 16:01:58 crc kubenswrapper[4966]: E0127 16:01:58.565011 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4p29l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-operator-controller-manager-78c6999f6f-6d5tv_openstack-operators(08ac68d1-220d-4098-9eed-6d0e3b752e5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:01:58 crc kubenswrapper[4966]: E0127 16:01:58.566517 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podUID="08ac68d1-220d-4098-9eed-6d0e3b752e5d" Jan 27 16:01:59 crc kubenswrapper[4966]: E0127 16:01:59.084125 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podUID="08ac68d1-220d-4098-9eed-6d0e3b752e5d" Jan 27 16:02:00 crc kubenswrapper[4966]: E0127 16:02:00.169383 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 27 16:02:00 crc kubenswrapper[4966]: E0127 16:02:00.169943 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-76z7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-zdjjc_openstack-operators(8645d6d2-f7cd-4578-9a1a-8b07beeae08c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:02:00 crc kubenswrapper[4966]: E0127 16:02:00.171356 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" podUID="8645d6d2-f7cd-4578-9a1a-8b07beeae08c" Jan 27 16:02:00 crc kubenswrapper[4966]: E0127 16:02:00.887605 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 27 16:02:00 crc kubenswrapper[4966]: E0127 16:02:00.887789 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mzzzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-2n8dp_openstack-operators(6006cb9c-d22f-47b1-b8b6-cb999ecab7df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:02:00 crc kubenswrapper[4966]: E0127 16:02:00.890022 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" podUID="6006cb9c-d22f-47b1-b8b6-cb999ecab7df" Jan 27 16:02:01 crc kubenswrapper[4966]: E0127 16:02:01.102737 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" podUID="6006cb9c-d22f-47b1-b8b6-cb999ecab7df" Jan 27 16:02:01 crc kubenswrapper[4966]: E0127 16:02:01.102950 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" podUID="8645d6d2-f7cd-4578-9a1a-8b07beeae08c" Jan 27 16:02:01 crc kubenswrapper[4966]: E0127 16:02:01.412842 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 27 16:02:01 crc kubenswrapper[4966]: E0127 16:02:01.413123 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqfmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-rzghf_openstack-operators(e2cfe3d1-d500-418e-bc6b-4da3482999c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:02:01 crc kubenswrapper[4966]: E0127 16:02:01.414422 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" podUID="e2cfe3d1-d500-418e-bc6b-4da3482999c3" Jan 27 16:02:02 crc kubenswrapper[4966]: E0127 16:02:02.111627 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" podUID="e2cfe3d1-d500-418e-bc6b-4da3482999c3" Jan 27 16:02:02 crc kubenswrapper[4966]: E0127 16:02:02.337860 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 27 16:02:02 crc kubenswrapper[4966]: E0127 16:02:02.338246 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrwhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-j6s8k_openstack-operators(dd40e2cd-59aa-442b-b27a-209632cba6e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:02:02 crc kubenswrapper[4966]: E0127 16:02:02.339559 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" podUID="dd40e2cd-59aa-442b-b27a-209632cba6e4" Jan 27 16:02:02 crc kubenswrapper[4966]: E0127 16:02:02.681484 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 16:02:02 crc kubenswrapper[4966]: E0127 16:02:02.681934 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqqt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rvh45_openstack-operators(0af070d2-e4fd-488e-abd5-c8ae5915d089): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:02:02 crc kubenswrapper[4966]: E0127 16:02:02.683059 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" podUID="0af070d2-e4fd-488e-abd5-c8ae5915d089" Jan 27 16:02:03 crc kubenswrapper[4966]: E0127 16:02:03.129206 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" podUID="dd40e2cd-59aa-442b-b27a-209632cba6e4" Jan 27 16:02:03 crc kubenswrapper[4966]: E0127 16:02:03.129476 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" podUID="0af070d2-e4fd-488e-abd5-c8ae5915d089" Jan 27 16:02:03 crc kubenswrapper[4966]: E0127 16:02:03.219753 4966 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 27 16:02:03 crc kubenswrapper[4966]: E0127 16:02:03.219993 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rpxq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-kxj69_openstack-operators(dd6f6600-3072-42e6-a8ca-5e72c960425a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:02:03 crc kubenswrapper[4966]: E0127 16:02:03.221214 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podUID="dd6f6600-3072-42e6-a8ca-5e72c960425a" Jan 27 16:02:04 crc kubenswrapper[4966]: E0127 16:02:04.146862 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podUID="dd6f6600-3072-42e6-a8ca-5e72c960425a" Jan 27 16:02:04 crc kubenswrapper[4966]: I0127 16:02:04.962431 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:02:04 crc kubenswrapper[4966]: I0127 16:02:04.968367 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfa058e6-1d6f-4dc2-8058-c00b201175b5-cert\") pod \"infra-operator-controller-manager-694cf4f878-qwc6v\" (UID: \"cfa058e6-1d6f-4dc2-8058-c00b201175b5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:02:05 crc kubenswrapper[4966]: I0127 16:02:05.006339 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:02:05 crc kubenswrapper[4966]: I0127 16:02:05.165578 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:02:05 crc kubenswrapper[4966]: I0127 16:02:05.170817 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20e54080-e732-4925-b0c2-35669744821d-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9\" (UID: \"20e54080-e732-4925-b0c2-35669744821d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:02:05 crc kubenswrapper[4966]: I0127 16:02:05.402886 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:02:05 crc kubenswrapper[4966]: I0127 16:02:05.574687 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:02:05 crc kubenswrapper[4966]: I0127 16:02:05.574809 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:02:05 crc kubenswrapper[4966]: I0127 16:02:05.579361 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-webhook-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:02:05 crc kubenswrapper[4966]: I0127 16:02:05.580511 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e49e9fb2-a5f0-4106-b239-93d488e4f515-metrics-certs\") pod \"openstack-operator-controller-manager-69869d7dcf-h42mh\" (UID: \"e49e9fb2-a5f0-4106-b239-93d488e4f515\") " pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:02:05 crc kubenswrapper[4966]: I0127 16:02:05.593431 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:02:06 crc kubenswrapper[4966]: E0127 16:02:06.293491 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 27 16:02:06 crc kubenswrapper[4966]: E0127 16:02:06.293729 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fdhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-sttbs_openstack-operators(64b84834-e9db-4f50-a7c7-6d24302652d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:02:06 crc kubenswrapper[4966]: E0127 16:02:06.294998 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" podUID="64b84834-e9db-4f50-a7c7-6d24302652d3" Jan 27 16:02:06 crc kubenswrapper[4966]: I0127 16:02:06.863805 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh"] Jan 27 16:02:06 crc kubenswrapper[4966]: W0127 16:02:06.878163 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode49e9fb2_a5f0_4106_b239_93d488e4f515.slice/crio-71a4a68df97c39948431b897b0cece5d92c2b8cce81464481a21527dd48a63bf WatchSource:0}: Error finding container 71a4a68df97c39948431b897b0cece5d92c2b8cce81464481a21527dd48a63bf: Status 404 returned error can't find the container with id 71a4a68df97c39948431b897b0cece5d92c2b8cce81464481a21527dd48a63bf Jan 27 16:02:06 crc kubenswrapper[4966]: I0127 16:02:06.957242 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v"] Jan 27 16:02:06 crc kubenswrapper[4966]: I0127 16:02:06.997608 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9"] Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.166604 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" event={"ID":"871381eb-c218-433c-a004-fea884f4ced0","Type":"ContainerStarted","Data":"c4a427ecbb7f7c4ff29913c18d29f93a3ab4928fee138b0dc2722eb588bbdc0b"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.167111 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.168678 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" event={"ID":"624197a8-447a-4004-a1e0-679ce29dbe86","Type":"ContainerStarted","Data":"6ec1b302c4539c1ff5ca6ded5d5d6f250e5be8e2554f37cb19631767092614b8"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.168755 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.170322 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" event={"ID":"e49e9fb2-a5f0-4106-b239-93d488e4f515","Type":"ContainerStarted","Data":"c3f222de1dd9b934a9450ed68e20843e460b33f70da5dff2ff4ee6c14e324abb"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.170363 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" event={"ID":"e49e9fb2-a5f0-4106-b239-93d488e4f515","Type":"ContainerStarted","Data":"71a4a68df97c39948431b897b0cece5d92c2b8cce81464481a21527dd48a63bf"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.170522 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.172064 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" event={"ID":"3ae401e5-feea-47d3-9c86-1e33635a461a","Type":"ContainerStarted","Data":"fee311969b0994fd270a42ff5698a654f826fb39b5ff38885e5921bf8175e840"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.172877 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.174856 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" event={"ID":"093d4126-d96d-475a-9519-020f2f73a742","Type":"ContainerStarted","Data":"64ae8a3e77bac72bf43670e7c2a28cd961d5d57cef258491a5ef5d1d026f2e2a"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.174938 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.176282 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" event={"ID":"cfa058e6-1d6f-4dc2-8058-c00b201175b5","Type":"ContainerStarted","Data":"2902be57e7752c445c067896e3708d76f1db1e6d81ed31b3ed678a79252d80cf"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.177865 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" event={"ID":"20e54080-e732-4925-b0c2-35669744821d","Type":"ContainerStarted","Data":"5d8f6a352e3487da2e4b1071ce948c06cad7bf6b4957f8484c3dab11123d7762"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.179858 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" event={"ID":"3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7","Type":"ContainerStarted","Data":"ab43dc43add4169f1cb5c06bba8cb1175ebe4276d05db00d3001198ba0ba2291"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.180760 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.182521 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" event={"ID":"f096bdf7-f589-4344-b71f-ab9db2eded5f","Type":"ContainerStarted","Data":"98161f1274ea52045dcaf03d98d4777cf6d51708e4fe929aaf5d84444bb03dce"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.183153 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.184678 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" event={"ID":"434d2d44-cb00-40d2-90b5-64dd65faadc8","Type":"ContainerStarted","Data":"9e08a1deb12e3464aa78591f60b5d03a2d0143c945bcd8269aa27f32c4e66ec0"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.185281 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.187303 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" event={"ID":"734cfb67-80ec-42a1-8d52-298ae82e1a6b","Type":"ContainerStarted","Data":"5c0ea5aad5bb0fd10a414ed0bd44a1187c3f3e38e8966f0c277e35eb6cdaae45"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.187832 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.189272 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" event={"ID":"ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff","Type":"ContainerStarted","Data":"a4c805a4152452eee02497475d2ec064159b7c0f6ccfe5009cd2e2b46ae0c5f3"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.189790 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.192793 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" event={"ID":"9dcc8f2a-06d2-493e-b0ce-50120cef400e","Type":"ContainerStarted","Data":"ada07f02fbf05a6f85c6c448bfb4a52100357187d2e030270579d5927f0b69d6"} Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.192839 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" Jan 27 16:02:07 crc kubenswrapper[4966]: E0127 16:02:07.193642 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" podUID="64b84834-e9db-4f50-a7c7-6d24302652d3" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.201689 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" podStartSLOduration=6.92835333 podStartE2EDuration="35.201668404s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.387555665 +0000 UTC m=+1161.690349153" lastFinishedPulling="2026-01-27 16:02:03.660870739 +0000 UTC m=+1189.963664227" observedRunningTime="2026-01-27 16:02:07.193234249 +0000 UTC m=+1193.496027757" watchObservedRunningTime="2026-01-27 16:02:07.201668404 +0000 UTC m=+1193.504461892" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.227530 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" podStartSLOduration=5.509123991 podStartE2EDuration="35.227506745s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:33.943753445 +0000 UTC m=+1160.246546933" lastFinishedPulling="2026-01-27 16:02:03.662136199 +0000 UTC m=+1189.964929687" observedRunningTime="2026-01-27 16:02:07.223474818 +0000 UTC m=+1193.526268316" watchObservedRunningTime="2026-01-27 16:02:07.227506745 +0000 UTC m=+1193.530300243" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.290602 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" podStartSLOduration=3.320623 podStartE2EDuration="34.290586474s" podCreationTimestamp="2026-01-27 16:01:33 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.429210442 +0000 UTC m=+1161.732003930" lastFinishedPulling="2026-01-27 16:02:06.399173916 +0000 UTC m=+1192.701967404" observedRunningTime="2026-01-27 16:02:07.284057659 +0000 UTC 
m=+1193.586851167" watchObservedRunningTime="2026-01-27 16:02:07.290586474 +0000 UTC m=+1193.593379962" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.332439 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" podStartSLOduration=5.930983337 podStartE2EDuration="35.332422196s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:34.259393419 +0000 UTC m=+1160.562186907" lastFinishedPulling="2026-01-27 16:02:03.660832278 +0000 UTC m=+1189.963625766" observedRunningTime="2026-01-27 16:02:07.328785563 +0000 UTC m=+1193.631579071" watchObservedRunningTime="2026-01-27 16:02:07.332422196 +0000 UTC m=+1193.635215684" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.404723 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" podStartSLOduration=3.387308252 podStartE2EDuration="34.404706964s" podCreationTimestamp="2026-01-27 16:01:33 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.417325859 +0000 UTC m=+1161.720119357" lastFinishedPulling="2026-01-27 16:02:06.434724571 +0000 UTC m=+1192.737518069" observedRunningTime="2026-01-27 16:02:07.401238516 +0000 UTC m=+1193.704032014" watchObservedRunningTime="2026-01-27 16:02:07.404706964 +0000 UTC m=+1193.707500452" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.427570 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" podStartSLOduration=6.206834811 podStartE2EDuration="35.427555371s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:34.440108178 +0000 UTC m=+1160.742901666" lastFinishedPulling="2026-01-27 16:02:03.660828738 +0000 UTC m=+1189.963622226" observedRunningTime="2026-01-27 16:02:07.426707525 +0000 UTC m=+1193.729501023" watchObservedRunningTime="2026-01-27 16:02:07.427555371 +0000 UTC m=+1193.730348859" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.457039 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" podStartSLOduration=5.84249203 podStartE2EDuration="35.457020686s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:34.046256601 +0000 UTC m=+1160.349050079" lastFinishedPulling="2026-01-27 16:02:03.660785247 +0000 UTC m=+1189.963578735" observedRunningTime="2026-01-27 16:02:07.451561095 +0000 UTC m=+1193.754354593" watchObservedRunningTime="2026-01-27 16:02:07.457020686 +0000 UTC m=+1193.759814174" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.489152 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" podStartSLOduration=7.219537416 podStartE2EDuration="35.489136563s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.392397377 +0000 UTC m=+1161.695190865" lastFinishedPulling="2026-01-27 16:02:03.661996524 +0000 UTC m=+1189.964790012" observedRunningTime="2026-01-27 16:02:07.484417515 +0000 UTC m=+1193.787211003" watchObservedRunningTime="2026-01-27 16:02:07.489136563 +0000 UTC m=+1193.791930051" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.520871 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" podStartSLOduration=34.520856369 podStartE2EDuration="34.520856369s" podCreationTimestamp="2026-01-27 16:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:02:07.514859361 +0000 UTC m=+1193.817652869" watchObservedRunningTime="2026-01-27 16:02:07.520856369 +0000 UTC m=+1193.823649857" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.563489 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podStartSLOduration=3.600813693 podStartE2EDuration="34.563464826s" podCreationTimestamp="2026-01-27 16:01:33 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.426482637 +0000 UTC m=+1161.729276135" lastFinishedPulling="2026-01-27 16:02:06.38913378 +0000 UTC m=+1192.691927268" observedRunningTime="2026-01-27 16:02:07.5575284 +0000 UTC m=+1193.860321918" watchObservedRunningTime="2026-01-27 16:02:07.563464826 +0000 UTC m=+1193.866258314" Jan 27 16:02:07 crc kubenswrapper[4966]: I0127 16:02:07.573609 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" podStartSLOduration=6.363967982 podStartE2EDuration="35.573592323s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:34.452420485 +0000 UTC m=+1160.755213973" lastFinishedPulling="2026-01-27 16:02:03.662044826 +0000 UTC m=+1189.964838314" observedRunningTime="2026-01-27 16:02:07.573249352 +0000 UTC m=+1193.876042850" watchObservedRunningTime="2026-01-27 16:02:07.573592323 +0000 UTC m=+1193.876385811" Jan 27 16:02:11 crc kubenswrapper[4966]: I0127 16:02:11.223107 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" event={"ID":"45594823-cdbb-4586-95d2-f2af9f6460b9","Type":"ContainerStarted","Data":"90a2a1c3eab668de33a3c1c1d9014bcd43a1232e388eee9a321e575f55fa253f"} Jan 27 16:02:11 crc kubenswrapper[4966]: I0127 16:02:11.225054 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" Jan 27 16:02:11 crc kubenswrapper[4966]: I0127 16:02:11.250335 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podStartSLOduration=3.219256616 podStartE2EDuration="39.250317243s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:34.153574089 +0000 UTC m=+1160.456367577" lastFinishedPulling="2026-01-27 16:02:10.184634716 +0000 UTC m=+1196.487428204" observedRunningTime="2026-01-27 16:02:11.247889277 +0000 UTC m=+1197.550682775" watchObservedRunningTime="2026-01-27 16:02:11.250317243 +0000 UTC m=+1197.553110731" Jan 27 16:02:12 crc kubenswrapper[4966]: I0127 16:02:12.232213 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" event={"ID":"eb03df91-4797-41be-a7fb-7ca572014c88","Type":"ContainerStarted","Data":"71f24679dea81b45da924a6d3d8bbd3b5ed0c5ac1d1a0c1f93158e1dd6d3fd51"} Jan 27 16:02:12 crc kubenswrapper[4966]: I0127 16:02:12.232912 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" Jan 27 16:02:12 crc kubenswrapper[4966]: I0127 16:02:12.234641 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" event={"ID":"cfa058e6-1d6f-4dc2-8058-c00b201175b5","Type":"ContainerStarted","Data":"81dfae4c85dacde648c00bc1c4d50217b3ea77b47bc5ceb3370e659a28f5987a"} Jan 27 16:02:12 crc kubenswrapper[4966]: I0127 16:02:12.234756 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:02:12 crc kubenswrapper[4966]: I0127 16:02:12.236674 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" event={"ID":"20e54080-e732-4925-b0c2-35669744821d","Type":"ContainerStarted","Data":"eb342873a7e52a76bb01c3714c3e1811a3d178d29442cb4dcb1796f01e1138be"} Jan 27 16:02:12 crc kubenswrapper[4966]: I0127 16:02:12.265175 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" podStartSLOduration=2.481091123 podStartE2EDuration="40.265152704s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:33.983527042 +0000 UTC m=+1160.286320530" lastFinishedPulling="2026-01-27 16:02:11.767588623 +0000 UTC m=+1198.070382111" observedRunningTime="2026-01-27 16:02:12.257616728 +0000 UTC m=+1198.560410236" watchObservedRunningTime="2026-01-27 16:02:12.265152704 +0000 UTC m=+1198.567946192" Jan 27 16:02:12 crc kubenswrapper[4966]: I0127 16:02:12.290041 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" podStartSLOduration=35.490810466 podStartE2EDuration="40.290021684s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:02:06.969154969 +0000 UTC m=+1193.271948457" lastFinishedPulling="2026-01-27 16:02:11.768366147 +0000 UTC m=+1198.071159675" observedRunningTime="2026-01-27 16:02:12.286279377 +0000 UTC m=+1198.589072865" watchObservedRunningTime="2026-01-27 16:02:12.290021684 +0000 UTC m=+1198.592815172" Jan 27 16:02:12 crc kubenswrapper[4966]: I0127 16:02:12.344218 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" podStartSLOduration=35.593927241 podStartE2EDuration="40.344195724s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:02:07.016194444 +0000 UTC m=+1193.318987942" lastFinishedPulling="2026-01-27 16:02:11.766462937 +0000 UTC m=+1198.069256425" observedRunningTime="2026-01-27 16:02:12.337409161 +0000 UTC m=+1198.640202669" watchObservedRunningTime="2026-01-27 16:02:12.344195724 +0000 UTC m=+1198.646989212" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.119453 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.245046 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.253381 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.388566 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.440373 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.483465 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.509885 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.585034 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.743349 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" Jan 27 16:02:13 crc kubenswrapper[4966]: I0127 16:02:13.758389 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.081188 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.262860 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" event={"ID":"08ac68d1-220d-4098-9eed-6d0e3b752e5d","Type":"ContainerStarted","Data":"38f213e4059aac0a7c48919ccc4b734dac8a1d28e63c7fc23bb24497691f51e9"} Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.263096 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.267006 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" event={"ID":"6006cb9c-d22f-47b1-b8b6-cb999ecab7df","Type":"ContainerStarted","Data":"22ae82213c968d34b22290673adcea3a471cb995b16c3ce8d5b7417294da6e47"} Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.267227 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.271004 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" event={"ID":"8645d6d2-f7cd-4578-9a1a-8b07beeae08c","Type":"ContainerStarted","Data":"571328b16b60b85ea26f3e92f00d948d6faaf15146acc766d0d9ea5f7c37f571"} Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.271254 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.282834 4966 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podStartSLOduration=2.7445373010000003 podStartE2EDuration="42.28281335s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:34.428751292 +0000 UTC m=+1160.731544780" lastFinishedPulling="2026-01-27 16:02:13.967027341 +0000 UTC m=+1200.269820829" observedRunningTime="2026-01-27 16:02:14.27613036 +0000 UTC m=+1200.578923858" watchObservedRunningTime="2026-01-27 16:02:14.28281335 +0000 UTC m=+1200.585606858" Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.294026 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" podStartSLOduration=2.794478334 podStartE2EDuration="41.294005241s" podCreationTimestamp="2026-01-27 16:01:33 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.409589036 +0000 UTC m=+1161.712382525" lastFinishedPulling="2026-01-27 16:02:13.909115934 +0000 UTC m=+1200.211909432" observedRunningTime="2026-01-27 16:02:14.292782172 +0000 UTC m=+1200.595575670" watchObservedRunningTime="2026-01-27 16:02:14.294005241 +0000 UTC m=+1200.596798739" Jan 27 16:02:14 crc kubenswrapper[4966]: I0127 16:02:14.312673 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" podStartSLOduration=2.9900736549999998 podStartE2EDuration="42.312655486s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:34.644392018 +0000 UTC m=+1160.947185516" lastFinishedPulling="2026-01-27 16:02:13.966973849 +0000 UTC m=+1200.269767347" observedRunningTime="2026-01-27 16:02:14.306499282 +0000 UTC m=+1200.609292780" watchObservedRunningTime="2026-01-27 16:02:14.312655486 +0000 UTC m=+1200.615448964" Jan 27 16:02:15 crc kubenswrapper[4966]: I0127 16:02:15.599176 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" Jan 27 16:02:17 crc kubenswrapper[4966]: I0127 16:02:17.305182 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" event={"ID":"0af070d2-e4fd-488e-abd5-c8ae5915d089","Type":"ContainerStarted","Data":"fb60b28599372dc9a5c9af53ea90c5f9cc708ba28ba53c8425788bb64ab42d5b"} Jan 27 16:02:17 crc kubenswrapper[4966]: I0127 16:02:17.328997 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rvh45" podStartSLOduration=2.807225261 podStartE2EDuration="44.328975364s" podCreationTimestamp="2026-01-27 16:01:33 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.411447264 +0000 UTC m=+1161.714240752" lastFinishedPulling="2026-01-27 16:02:16.933197357 +0000 UTC m=+1203.235990855" observedRunningTime="2026-01-27 16:02:17.320755847 +0000 UTC m=+1203.623549345" watchObservedRunningTime="2026-01-27 16:02:17.328975364 +0000 UTC m=+1203.631768852" Jan 27 16:02:18 crc kubenswrapper[4966]: I0127 16:02:18.314195 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" event={"ID":"e2cfe3d1-d500-418e-bc6b-4da3482999c3","Type":"ContainerStarted","Data":"772bd9fb4d58a5e384e1cc811ea737ef318c466be6e414a145648812d0056abb"} Jan 27 16:02:19 crc kubenswrapper[4966]: I0127 
16:02:19.327025 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" event={"ID":"64b84834-e9db-4f50-a7c7-6d24302652d3","Type":"ContainerStarted","Data":"e59e7dac71b475dc932036438a3ac002eb864bf7ba6f823658c1e6eaf7909798"} Jan 27 16:02:19 crc kubenswrapper[4966]: I0127 16:02:19.328165 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" Jan 27 16:02:19 crc kubenswrapper[4966]: I0127 16:02:19.330257 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" event={"ID":"dd40e2cd-59aa-442b-b27a-209632cba6e4","Type":"ContainerStarted","Data":"adf0fcbc8ef6015588f3707f4152c9c3d17a445c985b93fbe31f7adaac786067"} Jan 27 16:02:19 crc kubenswrapper[4966]: I0127 16:02:19.330297 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" Jan 27 16:02:19 crc kubenswrapper[4966]: I0127 16:02:19.330755 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" Jan 27 16:02:19 crc kubenswrapper[4966]: I0127 16:02:19.354310 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" podStartSLOduration=3.835390726 podStartE2EDuration="47.35428613s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.398353984 +0000 UTC m=+1161.701147472" lastFinishedPulling="2026-01-27 16:02:18.917249378 +0000 UTC m=+1205.220042876" observedRunningTime="2026-01-27 16:02:19.346261559 +0000 UTC m=+1205.649055097" watchObservedRunningTime="2026-01-27 16:02:19.35428613 +0000 UTC m=+1205.657079628" Jan 27 16:02:19 crc kubenswrapper[4966]: I0127 16:02:19.383424 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" podStartSLOduration=4.044057794 podStartE2EDuration="47.383401014s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:34.644820771 +0000 UTC m=+1160.947614259" lastFinishedPulling="2026-01-27 16:02:17.984163991 +0000 UTC m=+1204.286957479" observedRunningTime="2026-01-27 16:02:19.375686921 +0000 UTC m=+1205.678480409" watchObservedRunningTime="2026-01-27 16:02:19.383401014 +0000 UTC m=+1205.686194522" Jan 27 16:02:19 crc kubenswrapper[4966]: I0127 16:02:19.396590 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" podStartSLOduration=4.766690127 podStartE2EDuration="47.396567527s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.38769971 +0000 UTC m=+1161.690493198" lastFinishedPulling="2026-01-27 16:02:18.01757709 +0000 UTC m=+1204.320370598" observedRunningTime="2026-01-27 16:02:19.390072033 +0000 UTC m=+1205.692865551" watchObservedRunningTime="2026-01-27 16:02:19.396567527 +0000 UTC m=+1205.699361025" Jan 27 16:02:20 crc kubenswrapper[4966]: I0127 16:02:20.337385 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" 
event={"ID":"dd6f6600-3072-42e6-a8ca-5e72c960425a","Type":"ContainerStarted","Data":"f99db16f885bdabbf6b86e765186e8b6f39b566034ded7713f8ed4df5c615aa7"} Jan 27 16:02:20 crc kubenswrapper[4966]: I0127 16:02:20.337872 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" Jan 27 16:02:20 crc kubenswrapper[4966]: I0127 16:02:20.361189 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podStartSLOduration=2.730139744 podStartE2EDuration="47.361170662s" podCreationTimestamp="2026-01-27 16:01:33 +0000 UTC" firstStartedPulling="2026-01-27 16:01:35.387952268 +0000 UTC m=+1161.690745756" lastFinishedPulling="2026-01-27 16:02:20.018983196 +0000 UTC m=+1206.321776674" observedRunningTime="2026-01-27 16:02:20.357731805 +0000 UTC m=+1206.660525313" watchObservedRunningTime="2026-01-27 16:02:20.361170662 +0000 UTC m=+1206.663964160" Jan 27 16:02:23 crc kubenswrapper[4966]: I0127 16:02:23.105202 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" Jan 27 16:02:23 crc kubenswrapper[4966]: I0127 16:02:23.232366 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" Jan 27 16:02:23 crc kubenswrapper[4966]: I0127 16:02:23.446020 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" Jan 27 16:02:23 crc kubenswrapper[4966]: I0127 16:02:23.536234 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" Jan 27 16:02:23 crc kubenswrapper[4966]: I0127 16:02:23.566179 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" Jan 27 16:02:23 crc kubenswrapper[4966]: I0127 16:02:23.577197 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" Jan 27 16:02:23 crc kubenswrapper[4966]: I0127 16:02:23.728835 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" Jan 27 16:02:25 crc kubenswrapper[4966]: I0127 16:02:25.013876 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 16:02:25 crc kubenswrapper[4966]: I0127 16:02:25.410122 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 16:02:33 crc kubenswrapper[4966]: I0127 16:02:33.529615 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" Jan 27 16:02:33 crc kubenswrapper[4966]: I0127 16:02:33.716387 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.362037 4966 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-pdbrb"] Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.363890 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.366530 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2znrf" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.366662 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.366715 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.366982 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.380411 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdbrb"] Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.458515 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzpmd"] Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.460191 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.462434 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.476396 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzpmd"] Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.510997 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52545325-12bb-4111-9ace-f323527adf59-config\") pod \"dnsmasq-dns-675f4bcbfc-pdbrb\" (UID: \"52545325-12bb-4111-9ace-f323527adf59\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.511112 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6z9c\" (UniqueName: \"kubernetes.io/projected/52545325-12bb-4111-9ace-f323527adf59-kube-api-access-m6z9c\") pod \"dnsmasq-dns-675f4bcbfc-pdbrb\" (UID: \"52545325-12bb-4111-9ace-f323527adf59\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.613043 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pft2\" (UniqueName: \"kubernetes.io/projected/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-kube-api-access-2pft2\") pod \"dnsmasq-dns-78dd6ddcc-vzpmd\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.613091 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vzpmd\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.613114 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-config\") pod \"dnsmasq-dns-78dd6ddcc-vzpmd\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.613155 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6z9c\" (UniqueName: \"kubernetes.io/projected/52545325-12bb-4111-9ace-f323527adf59-kube-api-access-m6z9c\") pod \"dnsmasq-dns-675f4bcbfc-pdbrb\" (UID: \"52545325-12bb-4111-9ace-f323527adf59\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.613219 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52545325-12bb-4111-9ace-f323527adf59-config\") pod \"dnsmasq-dns-675f4bcbfc-pdbrb\" (UID: \"52545325-12bb-4111-9ace-f323527adf59\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.614057 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52545325-12bb-4111-9ace-f323527adf59-config\") pod \"dnsmasq-dns-675f4bcbfc-pdbrb\" (UID: \"52545325-12bb-4111-9ace-f323527adf59\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.635714 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6z9c\" (UniqueName: \"kubernetes.io/projected/52545325-12bb-4111-9ace-f323527adf59-kube-api-access-m6z9c\") pod \"dnsmasq-dns-675f4bcbfc-pdbrb\" (UID: \"52545325-12bb-4111-9ace-f323527adf59\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.714352 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pft2\" (UniqueName: \"kubernetes.io/projected/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-kube-api-access-2pft2\") pod \"dnsmasq-dns-78dd6ddcc-vzpmd\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.714419 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vzpmd\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.714444 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-config\") pod \"dnsmasq-dns-78dd6ddcc-vzpmd\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.715264 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vzpmd\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.715296 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-config\") pod \"dnsmasq-dns-78dd6ddcc-vzpmd\" (UID: 
\"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.724414 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.735123 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pft2\" (UniqueName: \"kubernetes.io/projected/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-kube-api-access-2pft2\") pod \"dnsmasq-dns-78dd6ddcc-vzpmd\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:50 crc kubenswrapper[4966]: I0127 16:02:50.782491 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:02:51 crc kubenswrapper[4966]: I0127 16:02:51.192141 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdbrb"] Jan 27 16:02:51 crc kubenswrapper[4966]: I0127 16:02:51.363450 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzpmd"] Jan 27 16:02:51 crc kubenswrapper[4966]: W0127 16:02:51.370213 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda85c8d57_6d1c_4e5b_ad0c_0074ea018944.slice/crio-e36d66f23abe3ddd787e16c86165edad4635e24bf8d8b93d078215568c64a8d7 WatchSource:0}: Error finding container e36d66f23abe3ddd787e16c86165edad4635e24bf8d8b93d078215568c64a8d7: Status 404 returned error can't find the container with id e36d66f23abe3ddd787e16c86165edad4635e24bf8d8b93d078215568c64a8d7 Jan 27 16:02:51 crc kubenswrapper[4966]: I0127 16:02:51.637810 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" event={"ID":"a85c8d57-6d1c-4e5b-ad0c-0074ea018944","Type":"ContainerStarted","Data":"e36d66f23abe3ddd787e16c86165edad4635e24bf8d8b93d078215568c64a8d7"} Jan 27 16:02:51 crc kubenswrapper[4966]: I0127 16:02:51.640382 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" event={"ID":"52545325-12bb-4111-9ace-f323527adf59","Type":"ContainerStarted","Data":"463574447ad7aa5127252bbbe58786d30b7d149d2a30ad0502657cd33d55cd60"} Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.207730 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdbrb"] Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.232052 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mzkzx"] Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.235218 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.246601 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mzkzx"] Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.358531 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-config\") pod \"dnsmasq-dns-666b6646f7-mzkzx\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.358779 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mzkzx\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.358856 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wkpw\" (UniqueName: \"kubernetes.io/projected/8785f00c-f81b-4bb7-9e88-25445442d30d-kube-api-access-5wkpw\") pod \"dnsmasq-dns-666b6646f7-mzkzx\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.464372 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mzkzx\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.464473 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wkpw\" (UniqueName: \"kubernetes.io/projected/8785f00c-f81b-4bb7-9e88-25445442d30d-kube-api-access-5wkpw\") pod \"dnsmasq-dns-666b6646f7-mzkzx\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.464681 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-config\") pod \"dnsmasq-dns-666b6646f7-mzkzx\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.465658 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mzkzx\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.465866 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-config\") pod \"dnsmasq-dns-666b6646f7-mzkzx\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.492804 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzpmd"] Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.504185 
4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wkpw\" (UniqueName: \"kubernetes.io/projected/8785f00c-f81b-4bb7-9e88-25445442d30d-kube-api-access-5wkpw\") pod \"dnsmasq-dns-666b6646f7-mzkzx\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.521057 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r5nws"] Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.526692 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.535491 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r5nws"] Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.566074 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.671565 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-r5nws\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.671760 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vz4z\" (UniqueName: \"kubernetes.io/projected/d76b62e7-a8ef-4976-90c0-6851a364d8d0-kube-api-access-8vz4z\") pod \"dnsmasq-dns-57d769cc4f-r5nws\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.671968 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-config\") pod \"dnsmasq-dns-57d769cc4f-r5nws\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.774665 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-r5nws\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.774802 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vz4z\" (UniqueName: \"kubernetes.io/projected/d76b62e7-a8ef-4976-90c0-6851a364d8d0-kube-api-access-8vz4z\") pod \"dnsmasq-dns-57d769cc4f-r5nws\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.774854 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-config\") pod \"dnsmasq-dns-57d769cc4f-r5nws\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.775966 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-config\") pod \"dnsmasq-dns-57d769cc4f-r5nws\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.775960 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-r5nws\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.799501 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vz4z\" (UniqueName: \"kubernetes.io/projected/d76b62e7-a8ef-4976-90c0-6851a364d8d0-kube-api-access-8vz4z\") pod \"dnsmasq-dns-57d769cc4f-r5nws\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:53 crc kubenswrapper[4966]: I0127 16:02:53.880636 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.216294 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mzkzx"] Jan 27 16:02:54 crc kubenswrapper[4966]: W0127 16:02:54.218843 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8785f00c_f81b_4bb7_9e88_25445442d30d.slice/crio-26d223007cea7554b14f5d20203539f5327fd80d8f68c12592c0bc9a46d1ca22 WatchSource:0}: Error finding container 26d223007cea7554b14f5d20203539f5327fd80d8f68c12592c0bc9a46d1ca22: Status 404 returned error can't find the container with id 26d223007cea7554b14f5d20203539f5327fd80d8f68c12592c0bc9a46d1ca22 Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.347089 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.355615 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.359428 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.359797 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.360046 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lkdnz" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.360159 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.361145 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.361256 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.363360 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.365003 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.375226 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.375459 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.376884 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.385704 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.420382 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.429393 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.448077 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r5nws"] Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.488658 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.488729 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3744b7e0-d355-43b7-bbf3-853416fb4483-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.488820 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-config-data\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.488853 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7jj\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-kube-api-access-4c7jj\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.488958 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29mt8\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-kube-api-access-29mt8\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.488984 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc 
kubenswrapper[4966]: I0127 16:02:54.489011 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9809bd0-3c51-46c3-b6c0-0b2576685999-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489043 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3744b7e0-d355-43b7-bbf3-853416fb4483-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489076 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489107 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489156 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9809bd0-3c51-46c3-b6c0-0b2576685999-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489285 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489326 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-config-data\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489367 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489399 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: 
I0127 16:02:54.489446 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489471 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489490 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-config-data\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489520 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489548 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489950 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.489983 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.490045 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.490106 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw9qv\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-kube-api-access-tw9qv\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.490144 4966 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.490180 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.490215 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.490342 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.490383 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.490401 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.490476 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.495238 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.495334 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.596767 4966 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.596836 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.596875 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.596919 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.596948 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.596969 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597011 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597033 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597057 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597090 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: 
\"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597113 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3744b7e0-d355-43b7-bbf3-853416fb4483-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597136 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-config-data\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597168 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7jj\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-kube-api-access-4c7jj\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597197 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9809bd0-3c51-46c3-b6c0-0b2576685999-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597220 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29mt8\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-kube-api-access-29mt8\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597240 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597263 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3744b7e0-d355-43b7-bbf3-853416fb4483-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597295 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597315 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597348 4966 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9809bd0-3c51-46c3-b6c0-0b2576685999-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597386 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597411 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-config-data\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597453 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597477 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597510 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597538 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597563 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-config-data\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597590 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597618 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597645 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597665 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597712 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.597731 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw9qv\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-kube-api-access-tw9qv\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.600000 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.600395 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.600491 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-config-data\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.600665 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.601322 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-config-data\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.606091 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.606720 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.607124 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.607133 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.607179 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.607438 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.608573 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.609246 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.609249 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.609283 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.610143 4966 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-config-data\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.612580 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.612616 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e106a5c20d0399fb230aa3c602806df4667723965fc672c68ac6f33cfc3bfd0c/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.612648 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.612675 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6968b035681a68868604c21042f04be572e2d6e7a96fb8fab8851faec754bf6a/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.614303 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.614349 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ae8ca631bd9e01a225a1e43fc47472bdb87f8107a883fd607e22e62d3fb3f48c/globalmount\"" pod="openstack/rabbitmq-server-1"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.614977 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9809bd0-3c51-46c3-b6c0-0b2576685999-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.615641 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.615778 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9809bd0-3c51-46c3-b6c0-0b2576685999-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.616395 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.617244 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.618162 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.620837 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.626517 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.626952 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3744b7e0-d355-43b7-bbf3-853416fb4483-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.627846 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw9qv\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-kube-api-access-tw9qv\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.628104 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3744b7e0-d355-43b7-bbf3-853416fb4483-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.629380 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.634222 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7jj\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-kube-api-access-4c7jj\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.642768 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.643379 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29mt8\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-kube-api-access-29mt8\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.644482 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.649694 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.649921 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.650130 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.650275 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.650361 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.650443 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.650681 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4rpnk"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.675728 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") pod \"rabbitmq-server-2\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.680426 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.696711 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") pod \"rabbitmq-server-1\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " pod="openstack/rabbitmq-server-1"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.698352 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") pod \"rabbitmq-server-0\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.703682 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" event={"ID":"8785f00c-f81b-4bb7-9e88-25445442d30d","Type":"ContainerStarted","Data":"26d223007cea7554b14f5d20203539f5327fd80d8f68c12592c0bc9a46d1ca22"}
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.709772 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" event={"ID":"d76b62e7-a8ef-4976-90c0-6851a364d8d0","Type":"ContainerStarted","Data":"12ee20790b5df6cd6b873f1e5cbb9d3dee73296c697b4d40d18e41ca8dc75b32"}
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.712511 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.729423 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.744384 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800636 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800686 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800723 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800753 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800792 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800820 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800840 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800875 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800909 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800924 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.800974 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dks9j\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-kube-api-access-dks9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.907499 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.907816 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.907868 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.907908 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.907961 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.907981 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.908000 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.908097 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dks9j\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-kube-api-access-dks9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.908142 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.908175 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.908226 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.909116 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.909158 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.909245 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.910445 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.912581 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.917129 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.917177 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f89dd92db4792d3c870b009b0083cb9063c922fd2565d2430e19a54155fb4df4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.917652 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.918330 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.918911 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.920390 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.934272 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dks9j\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-kube-api-access-dks9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:54 crc kubenswrapper[4966]: I0127 16:02:54.992730 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.045166 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.317881 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 27 16:02:55 crc kubenswrapper[4966]: W0127 16:02:55.321842 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f9cc33b_7f85_4c4a_9cf5_074309fd76eb.slice/crio-a3da364fca59502a0d8509bca45b20a38e2438d48b72460ef72a76650d067b1a WatchSource:0}: Error finding container a3da364fca59502a0d8509bca45b20a38e2438d48b72460ef72a76650d067b1a: Status 404 returned error can't find the container with id a3da364fca59502a0d8509bca45b20a38e2438d48b72460ef72a76650d067b1a
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.390959 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 27 16:02:55 crc kubenswrapper[4966]: W0127 16:02:55.394819 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9809bd0_3c51_46c3_b6c0_0b2576685999.slice/crio-12c8cb0523eb5cdbeb475c11b0a7803681a709b17a5dfb940c902d52dfc24847 WatchSource:0}: Error finding container 12c8cb0523eb5cdbeb475c11b0a7803681a709b17a5dfb940c902d52dfc24847: Status 404 returned error can't find the container with id 12c8cb0523eb5cdbeb475c11b0a7803681a709b17a5dfb940c902d52dfc24847
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.478108 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.677561 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 16:02:55 crc kubenswrapper[4966]: W0127 16:02:55.702846 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b278fb8_add2_4ddc_9e93_71962f1bb6fa.slice/crio-4eb8955693c47d65abbb32f2c568bb793f190f178f91c251c98263d87ba841d6 WatchSource:0}: Error finding container 4eb8955693c47d65abbb32f2c568bb793f190f178f91c251c98263d87ba841d6: Status 404 returned error can't find the container with id 4eb8955693c47d65abbb32f2c568bb793f190f178f91c251c98263d87ba841d6
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.753403 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b278fb8-add2-4ddc-9e93-71962f1bb6fa","Type":"ContainerStarted","Data":"4eb8955693c47d65abbb32f2c568bb793f190f178f91c251c98263d87ba841d6"}
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.755235 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c9809bd0-3c51-46c3-b6c0-0b2576685999","Type":"ContainerStarted","Data":"12c8cb0523eb5cdbeb475c11b0a7803681a709b17a5dfb940c902d52dfc24847"}
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.758682 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb","Type":"ContainerStarted","Data":"a3da364fca59502a0d8509bca45b20a38e2438d48b72460ef72a76650d067b1a"}
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.762755 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3744b7e0-d355-43b7-bbf3-853416fb4483","Type":"ContainerStarted","Data":"e7a40d9e06f6e977e7ae80d754d982e99c7131f7d0b7ed425d023abfb848c0a2"}
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.903476 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.906786 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.909625 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.910408 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.910672 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-89rxn"
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.911499 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.916535 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 16:02:55 crc kubenswrapper[4966]: I0127 16:02:55.970925 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.039077 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dc01362-ea5a-48fe-b67f-1e00b193c36e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.040536 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vdd\" (UniqueName: \"kubernetes.io/projected/1dc01362-ea5a-48fe-b67f-1e00b193c36e-kube-api-access-57vdd\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.040633 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1dc01362-ea5a-48fe-b67f-1e00b193c36e-config-data-default\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.040828 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc01362-ea5a-48fe-b67f-1e00b193c36e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.040884 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1dc01362-ea5a-48fe-b67f-1e00b193c36e-kolla-config\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.040940 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c23e6dc7-67ca-45b1-8646-b16dd7ef7992\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c23e6dc7-67ca-45b1-8646-b16dd7ef7992\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.040981 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1dc01362-ea5a-48fe-b67f-1e00b193c36e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.041036 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc01362-ea5a-48fe-b67f-1e00b193c36e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.142854 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1dc01362-ea5a-48fe-b67f-1e00b193c36e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.142989 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc01362-ea5a-48fe-b67f-1e00b193c36e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.143080 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dc01362-ea5a-48fe-b67f-1e00b193c36e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.143161 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57vdd\" (UniqueName: \"kubernetes.io/projected/1dc01362-ea5a-48fe-b67f-1e00b193c36e-kube-api-access-57vdd\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.143185 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1dc01362-ea5a-48fe-b67f-1e00b193c36e-config-data-default\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.143232 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc01362-ea5a-48fe-b67f-1e00b193c36e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.143261 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1dc01362-ea5a-48fe-b67f-1e00b193c36e-kolla-config\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.143289 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c23e6dc7-67ca-45b1-8646-b16dd7ef7992\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c23e6dc7-67ca-45b1-8646-b16dd7ef7992\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.143401 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1dc01362-ea5a-48fe-b67f-1e00b193c36e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.144326 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1dc01362-ea5a-48fe-b67f-1e00b193c36e-config-data-default\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.145184 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc01362-ea5a-48fe-b67f-1e00b193c36e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.145690 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1dc01362-ea5a-48fe-b67f-1e00b193c36e-kolla-config\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.148864 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.148921 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c23e6dc7-67ca-45b1-8646-b16dd7ef7992\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c23e6dc7-67ca-45b1-8646-b16dd7ef7992\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2a0d2c7708847d086f7594b2b825c6f06d32e770caa1c71054b2fa8295174f7f/globalmount\"" pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.153845 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc01362-ea5a-48fe-b67f-1e00b193c36e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.166494 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dc01362-ea5a-48fe-b67f-1e00b193c36e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.168945 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57vdd\" (UniqueName: \"kubernetes.io/projected/1dc01362-ea5a-48fe-b67f-1e00b193c36e-kube-api-access-57vdd\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.190352 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c23e6dc7-67ca-45b1-8646-b16dd7ef7992\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c23e6dc7-67ca-45b1-8646-b16dd7ef7992\") pod \"openstack-galera-0\" (UID: \"1dc01362-ea5a-48fe-b67f-1e00b193c36e\") " pod="openstack/openstack-galera-0"
Jan 27 16:02:56 crc kubenswrapper[4966]: I0127 16:02:56.241932 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.146029 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.148236 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.151182 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.151423 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.151573 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.154489 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-phtw7"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.160721 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.293708 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1be6855-0a73-406a-93d5-625f7fca558b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.293806 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-603a0afe-af6d-4770-9f7b-c4a4c1e25df5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-603a0afe-af6d-4770-9f7b-c4a4c1e25df5\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.293878 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a1be6855-0a73-406a-93d5-625f7fca558b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.293926 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a1be6855-0a73-406a-93d5-625f7fca558b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.293978 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1be6855-0a73-406a-93d5-625f7fca558b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.294000 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzjqs\" (UniqueName: \"kubernetes.io/projected/a1be6855-0a73-406a-93d5-625f7fca558b-kube-api-access-pzjqs\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.294028 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1be6855-0a73-406a-93d5-625f7fca558b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.294047 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a1be6855-0a73-406a-93d5-625f7fca558b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.395471 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-603a0afe-af6d-4770-9f7b-c4a4c1e25df5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-603a0afe-af6d-4770-9f7b-c4a4c1e25df5\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.395566 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a1be6855-0a73-406a-93d5-625f7fca558b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.395619 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a1be6855-0a73-406a-93d5-625f7fca558b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.395668 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1be6855-0a73-406a-93d5-625f7fca558b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.395710 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzjqs\" (UniqueName: \"kubernetes.io/projected/a1be6855-0a73-406a-93d5-625f7fca558b-kube-api-access-pzjqs\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.395744 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1be6855-0a73-406a-93d5-625f7fca558b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.395780 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a1be6855-0a73-406a-93d5-625f7fca558b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.395807 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1be6855-0a73-406a-93d5-625f7fca558b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.396506 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a1be6855-0a73-406a-93d5-625f7fca558b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.397402 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a1be6855-0a73-406a-93d5-625f7fca558b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.397455 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1be6855-0a73-406a-93d5-625f7fca558b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.397643 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a1be6855-0a73-406a-93d5-625f7fca558b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.398288 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.405093 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-603a0afe-af6d-4770-9f7b-c4a4c1e25df5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-603a0afe-af6d-4770-9f7b-c4a4c1e25df5\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e2bf3766b452166035de209fc5463a607b0572ee77682cf9c1aff4a14625b970/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.405764 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1be6855-0a73-406a-93d5-625f7fca558b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.445466 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1be6855-0a73-406a-93d5-625f7fca558b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.450376 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzjqs\" (UniqueName: \"kubernetes.io/projected/a1be6855-0a73-406a-93d5-625f7fca558b-kube-api-access-pzjqs\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.509197 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-603a0afe-af6d-4770-9f7b-c4a4c1e25df5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-603a0afe-af6d-4770-9f7b-c4a4c1e25df5\") pod \"openstack-cell1-galera-0\" (UID: \"a1be6855-0a73-406a-93d5-625f7fca558b\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.529363 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.598447 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.606226 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.615314 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-m2pkm"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.615617 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.615762 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.627303 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.709173 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3778d5b6-0474-4399-b163-521cb18b3eda-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.709377 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3778d5b6-0474-4399-b163-521cb18b3eda-kolla-config\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.710513 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whd2d\" (UniqueName: \"kubernetes.io/projected/3778d5b6-0474-4399-b163-521cb18b3eda-kube-api-access-whd2d\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.710577 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3778d5b6-0474-4399-b163-521cb18b3eda-config-data\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.710928 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3778d5b6-0474-4399-b163-521cb18b3eda-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.812677 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whd2d\" (UniqueName: \"kubernetes.io/projected/3778d5b6-0474-4399-b163-521cb18b3eda-kube-api-access-whd2d\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.812720 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3778d5b6-0474-4399-b163-521cb18b3eda-config-data\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.812743 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3778d5b6-0474-4399-b163-521cb18b3eda-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.812767 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3778d5b6-0474-4399-b163-521cb18b3eda-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.812817 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3778d5b6-0474-4399-b163-521cb18b3eda-kolla-config\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.814056 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3778d5b6-0474-4399-b163-521cb18b3eda-config-data\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.814559 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3778d5b6-0474-4399-b163-521cb18b3eda-kolla-config\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.836777 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3778d5b6-0474-4399-b163-521cb18b3eda-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.839417 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3778d5b6-0474-4399-b163-521cb18b3eda-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.853419 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whd2d\" (UniqueName: \"kubernetes.io/projected/3778d5b6-0474-4399-b163-521cb18b3eda-kube-api-access-whd2d\") pod \"memcached-0\" (UID: \"3778d5b6-0474-4399-b163-521cb18b3eda\") " pod="openstack/memcached-0"
Jan 27 16:02:57 crc kubenswrapper[4966]: I0127 16:02:57.940265 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 27 16:02:59 crc kubenswrapper[4966]: I0127 16:02:59.267959 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 16:02:59 crc kubenswrapper[4966]: I0127 16:02:59.270483 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 16:02:59 crc kubenswrapper[4966]: I0127 16:02:59.273918 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-m78rf"
Jan 27 16:02:59 crc kubenswrapper[4966]: I0127 16:02:59.280651 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 16:02:59 crc kubenswrapper[4966]: I0127 16:02:59.347351 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrr5\" (UniqueName: \"kubernetes.io/projected/2d7787a4-9dd3-438b-8fc8-f6708ede1f4b-kube-api-access-rmrr5\") pod \"kube-state-metrics-0\" (UID: \"2d7787a4-9dd3-438b-8fc8-f6708ede1f4b\") " pod="openstack/kube-state-metrics-0"
Jan 27 16:02:59 crc kubenswrapper[4966]: I0127 16:02:59.449148 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmrr5\" (UniqueName: \"kubernetes.io/projected/2d7787a4-9dd3-438b-8fc8-f6708ede1f4b-kube-api-access-rmrr5\") pod \"kube-state-metrics-0\" (UID: \"2d7787a4-9dd3-438b-8fc8-f6708ede1f4b\") " pod="openstack/kube-state-metrics-0"
Jan 27 16:02:59 crc kubenswrapper[4966]: I0127 16:02:59.502728 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmrr5\" (UniqueName: \"kubernetes.io/projected/2d7787a4-9dd3-438b-8fc8-f6708ede1f4b-kube-api-access-rmrr5\") pod \"kube-state-metrics-0\" (UID: \"2d7787a4-9dd3-438b-8fc8-f6708ede1f4b\") " pod="openstack/kube-state-metrics-0"
Jan 27 16:02:59 crc kubenswrapper[4966]: I0127 16:02:59.592832 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.222277 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr"]
Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.223861 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr"
Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.233794 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr"]
Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.234079 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.234320 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-ch6np"
Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.271129 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e34c83b-3e54-47c0-88c7-57c3065deda1-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-qwptr\" (UID: \"1e34c83b-3e54-47c0-88c7-57c3065deda1\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr"
Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.271478 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx9f6\" (UniqueName: \"kubernetes.io/projected/1e34c83b-3e54-47c0-88c7-57c3065deda1-kube-api-access-xx9f6\") pod \"observability-ui-dashboards-66cbf594b5-qwptr\" (UID: \"1e34c83b-3e54-47c0-88c7-57c3065deda1\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr"
Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.374288 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx9f6\" (UniqueName: \"kubernetes.io/projected/1e34c83b-3e54-47c0-88c7-57c3065deda1-kube-api-access-xx9f6\") pod \"observability-ui-dashboards-66cbf594b5-qwptr\" (UID: \"1e34c83b-3e54-47c0-88c7-57c3065deda1\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr"
Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.374891 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e34c83b-3e54-47c0-88c7-57c3065deda1-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-qwptr\" (UID: \"1e34c83b-3e54-47c0-88c7-57c3065deda1\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr"
Jan 27 16:03:00 crc kubenswrapper[4966]: E0127 16:03:00.375177 4966 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found
Jan 27 16:03:00 crc kubenswrapper[4966]: E0127 16:03:00.375255 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e34c83b-3e54-47c0-88c7-57c3065deda1-serving-cert podName:1e34c83b-3e54-47c0-88c7-57c3065deda1 nodeName:}" failed. No retries permitted until 2026-01-27 16:03:00.875235785 +0000 UTC m=+1247.178029273 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1e34c83b-3e54-47c0-88c7-57c3065deda1-serving-cert") pod "observability-ui-dashboards-66cbf594b5-qwptr" (UID: "1e34c83b-3e54-47c0-88c7-57c3065deda1") : secret "observability-ui-dashboards" not found Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.408580 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx9f6\" (UniqueName: \"kubernetes.io/projected/1e34c83b-3e54-47c0-88c7-57c3065deda1-kube-api-access-xx9f6\") pod \"observability-ui-dashboards-66cbf594b5-qwptr\" (UID: \"1e34c83b-3e54-47c0-88c7-57c3065deda1\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.609209 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76cf6b7d9d-8vc2q"] Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.612224 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.635344 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76cf6b7d9d-8vc2q"] Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.680341 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-oauth-serving-cert\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.680404 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f71765ab-530f-4029-9091-f63413efd9c2-console-oauth-config\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.680446 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-service-ca\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.680579 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-console-config\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.680631 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-trusted-ca-bundle\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.680845 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f71765ab-530f-4029-9091-f63413efd9c2-console-serving-cert\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.681176 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhts\" (UniqueName: \"kubernetes.io/projected/f71765ab-530f-4029-9091-f63413efd9c2-kube-api-access-9jhts\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.695101 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.697645 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.704150 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.704338 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.704473 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lvfmv" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.704579 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.704695 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.704805 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.706687 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.711576 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.724975 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783076 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783147 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/688b0294-ca80-4f78-8704-31c17a81345b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783197 4966 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-console-config\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783237 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783260 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-trusted-ca-bundle\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783306 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783332 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f71765ab-530f-4029-9091-f63413efd9c2-console-serving-cert\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783382 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-config\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783416 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783440 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbq6s\" (UniqueName: \"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-kube-api-access-tbq6s\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783470 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783488 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jhts\" (UniqueName: \"kubernetes.io/projected/f71765ab-530f-4029-9091-f63413efd9c2-kube-api-access-9jhts\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783526 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-oauth-serving-cert\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783543 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f71765ab-530f-4029-9091-f63413efd9c2-console-oauth-config\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783568 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-service-ca\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783587 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.783622 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.785796 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-oauth-serving-cert\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.787040 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-service-ca\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.787469 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-trusted-ca-bundle\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.788865 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f71765ab-530f-4029-9091-f63413efd9c2-console-oauth-config\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.790956 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f71765ab-530f-4029-9091-f63413efd9c2-console-serving-cert\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.794156 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f71765ab-530f-4029-9091-f63413efd9c2-console-config\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.803166 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jhts\" (UniqueName: \"kubernetes.io/projected/f71765ab-530f-4029-9091-f63413efd9c2-kube-api-access-9jhts\") pod \"console-76cf6b7d9d-8vc2q\" (UID: \"f71765ab-530f-4029-9091-f63413efd9c2\") " pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.885383 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.885562 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.885715 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-config\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.885794 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.885846 4966 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-tbq6s\" (UniqueName: \"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-kube-api-access-tbq6s\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.885967 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.886011 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e34c83b-3e54-47c0-88c7-57c3065deda1-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-qwptr\" (UID: \"1e34c83b-3e54-47c0-88c7-57c3065deda1\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.886117 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.886227 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.886315 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.886365 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/688b0294-ca80-4f78-8704-31c17a81345b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.886650 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.886851 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 
27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.887521 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.889462 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.889726 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e34c83b-3e54-47c0-88c7-57c3065deda1-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-qwptr\" (UID: \"1e34c83b-3e54-47c0-88c7-57c3065deda1\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.891291 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.891330 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a33938e059c072199d3b6223bdfa367a3b3bcef4e32c284009ff56b852d373de/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.891434 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/688b0294-ca80-4f78-8704-31c17a81345b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.892754 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-config\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.894125 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.898653 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc 
kubenswrapper[4966]: I0127 16:03:00.902856 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbq6s\" (UniqueName: \"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-kube-api-access-tbq6s\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.934570 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:00 crc kubenswrapper[4966]: I0127 16:03:00.940326 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:01 crc kubenswrapper[4966]: I0127 16:03:01.035443 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:01 crc kubenswrapper[4966]: I0127 16:03:01.150185 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr" Jan 27 16:03:01 crc kubenswrapper[4966]: I0127 16:03:01.393486 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.695525 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96kvf"] Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.696666 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.701706 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.704785 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-ntpvl" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.705429 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.741757 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8qxl\" (UniqueName: \"kubernetes.io/projected/7c442a88-8881-4780-a2c3-eddb5d940209-kube-api-access-w8qxl\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.743012 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c442a88-8881-4780-a2c3-eddb5d940209-var-run-ovn\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.743125 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c442a88-8881-4780-a2c3-eddb5d940209-combined-ca-bundle\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.743329 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c442a88-8881-4780-a2c3-eddb5d940209-ovn-controller-tls-certs\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.743405 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c442a88-8881-4780-a2c3-eddb5d940209-var-log-ovn\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.743474 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c442a88-8881-4780-a2c3-eddb5d940209-var-run\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.743546 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c442a88-8881-4780-a2c3-eddb5d940209-scripts\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.784781 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5jgjj"] Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.786788 4966 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.801168 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96kvf"] Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.815702 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5jgjj"] Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.845674 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-var-log\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.845726 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-var-lib\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.845855 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c442a88-8881-4780-a2c3-eddb5d940209-var-run\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.846059 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c442a88-8881-4780-a2c3-eddb5d940209-scripts\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.846140 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8qxl\" (UniqueName: \"kubernetes.io/projected/7c442a88-8881-4780-a2c3-eddb5d940209-kube-api-access-w8qxl\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.846187 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-var-run\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.846234 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c442a88-8881-4780-a2c3-eddb5d940209-var-run\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.846243 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c442a88-8881-4780-a2c3-eddb5d940209-var-run-ovn\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.846347 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c442a88-8881-4780-a2c3-eddb5d940209-combined-ca-bundle\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.846359 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c442a88-8881-4780-a2c3-eddb5d940209-var-run-ovn\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.847841 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-etc-ovs\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.847873 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae54b281-3b7e-412e-8575-9096f191343e-scripts\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.847949 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c442a88-8881-4780-a2c3-eddb5d940209-ovn-controller-tls-certs\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.848048 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c442a88-8881-4780-a2c3-eddb5d940209-var-log-ovn\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.848340 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c442a88-8881-4780-a2c3-eddb5d940209-var-log-ovn\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.848629 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qkgh\" (UniqueName: \"kubernetes.io/projected/ae54b281-3b7e-412e-8575-9096f191343e-kube-api-access-7qkgh\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.851685 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c442a88-8881-4780-a2c3-eddb5d940209-combined-ca-bundle\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.852221 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c442a88-8881-4780-a2c3-eddb5d940209-ovn-controller-tls-certs\") pod 
\"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.855162 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c442a88-8881-4780-a2c3-eddb5d940209-scripts\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.868809 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8qxl\" (UniqueName: \"kubernetes.io/projected/7c442a88-8881-4780-a2c3-eddb5d940209-kube-api-access-w8qxl\") pod \"ovn-controller-96kvf\" (UID: \"7c442a88-8881-4780-a2c3-eddb5d940209\") " pod="openstack/ovn-controller-96kvf" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.949970 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-etc-ovs\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.950013 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae54b281-3b7e-412e-8575-9096f191343e-scripts\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.950059 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qkgh\" (UniqueName: \"kubernetes.io/projected/ae54b281-3b7e-412e-8575-9096f191343e-kube-api-access-7qkgh\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.950079 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-var-log\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.950096 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-var-lib\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.950153 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-var-run\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.950293 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-var-run\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.950384 4966 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-etc-ovs\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.950508 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-var-log\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.951134 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ae54b281-3b7e-412e-8575-9096f191343e-var-lib\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.952096 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae54b281-3b7e-412e-8575-9096f191343e-scripts\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:02 crc kubenswrapper[4966]: I0127 16:03:02.972525 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qkgh\" (UniqueName: \"kubernetes.io/projected/ae54b281-3b7e-412e-8575-9096f191343e-kube-api-access-7qkgh\") pod \"ovn-controller-ovs-5jgjj\" (UID: \"ae54b281-3b7e-412e-8575-9096f191343e\") " pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.046500 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96kvf" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.110660 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.938327 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.940256 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.949540 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6mm2k" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.950194 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.950454 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.950762 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.953947 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.955155 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.977916 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17569cb4-bb32-47c9-8fed-2bffeda09a7c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.977966 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17569cb4-bb32-47c9-8fed-2bffeda09a7c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.978017 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5p5r\" (UniqueName: \"kubernetes.io/projected/17569cb4-bb32-47c9-8fed-2bffeda09a7c-kube-api-access-s5p5r\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.978052 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17569cb4-bb32-47c9-8fed-2bffeda09a7c-config\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.978077 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17569cb4-bb32-47c9-8fed-2bffeda09a7c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.978096 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17569cb4-bb32-47c9-8fed-2bffeda09a7c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.978156 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-0eeb44ba-002e-4646-9617-c6a9cb5d9521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eeb44ba-002e-4646-9617-c6a9cb5d9521\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:03 crc kubenswrapper[4966]: I0127 16:03:03.978217 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17569cb4-bb32-47c9-8fed-2bffeda09a7c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.079672 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17569cb4-bb32-47c9-8fed-2bffeda09a7c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.079886 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17569cb4-bb32-47c9-8fed-2bffeda09a7c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.079934 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17569cb4-bb32-47c9-8fed-2bffeda09a7c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.079969 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5p5r\" (UniqueName: \"kubernetes.io/projected/17569cb4-bb32-47c9-8fed-2bffeda09a7c-kube-api-access-s5p5r\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.079997 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17569cb4-bb32-47c9-8fed-2bffeda09a7c-config\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.080024 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17569cb4-bb32-47c9-8fed-2bffeda09a7c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.080052 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17569cb4-bb32-47c9-8fed-2bffeda09a7c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.080114 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0eeb44ba-002e-4646-9617-c6a9cb5d9521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eeb44ba-002e-4646-9617-c6a9cb5d9521\") pod 
\"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.081036 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17569cb4-bb32-47c9-8fed-2bffeda09a7c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.081708 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17569cb4-bb32-47c9-8fed-2bffeda09a7c-config\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.081918 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17569cb4-bb32-47c9-8fed-2bffeda09a7c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.084264 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17569cb4-bb32-47c9-8fed-2bffeda09a7c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.084318 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17569cb4-bb32-47c9-8fed-2bffeda09a7c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.084463 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.084506 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0eeb44ba-002e-4646-9617-c6a9cb5d9521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eeb44ba-002e-4646-9617-c6a9cb5d9521\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4a14d77d269bbe46919776b3d41fd6f90c5a7460d54b040fd9723d8a1380e6a/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.085288 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17569cb4-bb32-47c9-8fed-2bffeda09a7c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.106176 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5p5r\" (UniqueName: \"kubernetes.io/projected/17569cb4-bb32-47c9-8fed-2bffeda09a7c-kube-api-access-s5p5r\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.138145 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0eeb44ba-002e-4646-9617-c6a9cb5d9521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eeb44ba-002e-4646-9617-c6a9cb5d9521\") pod \"ovsdbserver-nb-0\" (UID: \"17569cb4-bb32-47c9-8fed-2bffeda09a7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:04 crc kubenswrapper[4966]: I0127 16:03:04.262713 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.746273 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.750359 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.752789 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.753020 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.753160 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7lpwj" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.753285 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.757240 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.838802 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfe12ec4-548f-4242-94fb-1ac7cac46c73-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.839031 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d4a349c7-a710-4cb2-994a-e44bb13a236a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d4a349c7-a710-4cb2-994a-e44bb13a236a\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.839063 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46q6b\" (UniqueName: \"kubernetes.io/projected/bfe12ec4-548f-4242-94fb-1ac7cac46c73-kube-api-access-46q6b\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.839095 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfe12ec4-548f-4242-94fb-1ac7cac46c73-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.839117 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfe12ec4-548f-4242-94fb-1ac7cac46c73-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.839294 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfe12ec4-548f-4242-94fb-1ac7cac46c73-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.839389 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe12ec4-548f-4242-94fb-1ac7cac46c73-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.840762 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe12ec4-548f-4242-94fb-1ac7cac46c73-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.942624 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfe12ec4-548f-4242-94fb-1ac7cac46c73-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.942801 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d4a349c7-a710-4cb2-994a-e44bb13a236a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d4a349c7-a710-4cb2-994a-e44bb13a236a\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.942831 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46q6b\" (UniqueName: \"kubernetes.io/projected/bfe12ec4-548f-4242-94fb-1ac7cac46c73-kube-api-access-46q6b\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.943419 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfe12ec4-548f-4242-94fb-1ac7cac46c73-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.943448 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfe12ec4-548f-4242-94fb-1ac7cac46c73-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.943505 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfe12ec4-548f-4242-94fb-1ac7cac46c73-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.943545 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe12ec4-548f-4242-94fb-1ac7cac46c73-config\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.943616 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe12ec4-548f-4242-94fb-1ac7cac46c73-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.944068 4966 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfe12ec4-548f-4242-94fb-1ac7cac46c73-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.944719 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe12ec4-548f-4242-94fb-1ac7cac46c73-config\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.944836 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfe12ec4-548f-4242-94fb-1ac7cac46c73-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.946987 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.947025 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d4a349c7-a710-4cb2-994a-e44bb13a236a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d4a349c7-a710-4cb2-994a-e44bb13a236a\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2a5a4dd57f9f694745ab17f6b667f8909f17488f7c2ca7253481f2e03eaff425/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.950874 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe12ec4-548f-4242-94fb-1ac7cac46c73-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.950979 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfe12ec4-548f-4242-94fb-1ac7cac46c73-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.952801 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfe12ec4-548f-4242-94fb-1ac7cac46c73-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:06 crc kubenswrapper[4966]: I0127 16:03:06.965811 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46q6b\" (UniqueName: \"kubernetes.io/projected/bfe12ec4-548f-4242-94fb-1ac7cac46c73-kube-api-access-46q6b\") pod \"ovsdbserver-sb-0\" (UID: \"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:07 crc kubenswrapper[4966]: I0127 16:03:07.000983 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d4a349c7-a710-4cb2-994a-e44bb13a236a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d4a349c7-a710-4cb2-994a-e44bb13a236a\") pod \"ovsdbserver-sb-0\" (UID: 
\"bfe12ec4-548f-4242-94fb-1ac7cac46c73\") " pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:07 crc kubenswrapper[4966]: I0127 16:03:07.076358 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:10 crc kubenswrapper[4966]: I0127 16:03:10.119391 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:03:10 crc kubenswrapper[4966]: I0127 16:03:10.120003 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.609769 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.610648 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6z9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pdbrb_openstack(52545325-12bb-4111-9ace-f323527adf59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:03:16 crc 
kubenswrapper[4966]: E0127 16:03:16.612358 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" podUID="52545325-12bb-4111-9ace-f323527adf59" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.637842 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.638016 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wkpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-mzkzx_openstack(8785f00c-f81b-4bb7-9e88-25445442d30d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.639544 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.733517 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.734063 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vz4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-r5nws_openstack(d76b62e7-a8ef-4976-90c0-6851a364d8d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.735823 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" podUID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.743252 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.743456 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) 
--port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vzpmd_openstack(a85c8d57-6d1c-4e5b-ad0c-0074ea018944): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:03:16 crc kubenswrapper[4966]: E0127 16:03:16.744609 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" podUID="a85c8d57-6d1c-4e5b-ad0c-0074ea018944" Jan 27 16:03:16 crc kubenswrapper[4966]: I0127 16:03:16.776909 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 16:03:17 crc kubenswrapper[4966]: E0127 16:03:17.001925 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" podUID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" Jan 27 16:03:17 crc kubenswrapper[4966]: E0127 16:03:17.001953 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" Jan 27 16:03:18 crc kubenswrapper[4966]: W0127 16:03:18.193614 4966 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfe12ec4_548f_4242_94fb_1ac7cac46c73.slice/crio-d5ed1a54c062fed1feb7f28a7ac4d1b0fb373cedf58224db66bd860876899181 WatchSource:0}: Error finding container d5ed1a54c062fed1feb7f28a7ac4d1b0fb373cedf58224db66bd860876899181: Status 404 returned error can't find the container with id d5ed1a54c062fed1feb7f28a7ac4d1b0fb373cedf58224db66bd860876899181 Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.316354 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.319142 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.409783 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6z9c\" (UniqueName: \"kubernetes.io/projected/52545325-12bb-4111-9ace-f323527adf59-kube-api-access-m6z9c\") pod \"52545325-12bb-4111-9ace-f323527adf59\" (UID: \"52545325-12bb-4111-9ace-f323527adf59\") " Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.409928 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-dns-svc\") pod \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.410004 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52545325-12bb-4111-9ace-f323527adf59-config\") pod \"52545325-12bb-4111-9ace-f323527adf59\" (UID: \"52545325-12bb-4111-9ace-f323527adf59\") " Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.410065 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pft2\" (UniqueName: \"kubernetes.io/projected/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-kube-api-access-2pft2\") pod \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.410225 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-config\") pod \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\" (UID: \"a85c8d57-6d1c-4e5b-ad0c-0074ea018944\") " Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.410684 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52545325-12bb-4111-9ace-f323527adf59-config" (OuterVolumeSpecName: "config") pod "52545325-12bb-4111-9ace-f323527adf59" (UID: "52545325-12bb-4111-9ace-f323527adf59"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.411173 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52545325-12bb-4111-9ace-f323527adf59-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.411350 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-config" (OuterVolumeSpecName: "config") pod "a85c8d57-6d1c-4e5b-ad0c-0074ea018944" (UID: "a85c8d57-6d1c-4e5b-ad0c-0074ea018944"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.417223 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a85c8d57-6d1c-4e5b-ad0c-0074ea018944" (UID: "a85c8d57-6d1c-4e5b-ad0c-0074ea018944"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.423293 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52545325-12bb-4111-9ace-f323527adf59-kube-api-access-m6z9c" (OuterVolumeSpecName: "kube-api-access-m6z9c") pod "52545325-12bb-4111-9ace-f323527adf59" (UID: "52545325-12bb-4111-9ace-f323527adf59"). InnerVolumeSpecName "kube-api-access-m6z9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.429748 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-kube-api-access-2pft2" (OuterVolumeSpecName: "kube-api-access-2pft2") pod "a85c8d57-6d1c-4e5b-ad0c-0074ea018944" (UID: "a85c8d57-6d1c-4e5b-ad0c-0074ea018944"). InnerVolumeSpecName "kube-api-access-2pft2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.513552 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.513594 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pft2\" (UniqueName: \"kubernetes.io/projected/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-kube-api-access-2pft2\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.513605 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85c8d57-6d1c-4e5b-ad0c-0074ea018944-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.513616 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6z9c\" (UniqueName: \"kubernetes.io/projected/52545325-12bb-4111-9ace-f323527adf59-kube-api-access-m6z9c\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.917139 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 16:03:18 crc kubenswrapper[4966]: W0127 16:03:18.918007 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d7787a4_9dd3_438b_8fc8_f6708ede1f4b.slice/crio-ff338db7cba2a5a941ca79f81feb9fd68a18ab055f2edcdc6afded90ef72cf31 WatchSource:0}: Error finding container ff338db7cba2a5a941ca79f81feb9fd68a18ab055f2edcdc6afded90ef72cf31: Status 404 returned error can't find the container with id ff338db7cba2a5a941ca79f81feb9fd68a18ab055f2edcdc6afded90ef72cf31 Jan 27 16:03:18 crc kubenswrapper[4966]: I0127 16:03:18.942731 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76cf6b7d9d-8vc2q"] Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.094488 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 16:03:19 crc 
kubenswrapper[4966]: I0127 16:03:19.098789 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d7787a4-9dd3-438b-8fc8-f6708ede1f4b","Type":"ContainerStarted","Data":"ff338db7cba2a5a941ca79f81feb9fd68a18ab055f2edcdc6afded90ef72cf31"} Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.100354 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76cf6b7d9d-8vc2q" event={"ID":"f71765ab-530f-4029-9091-f63413efd9c2","Type":"ContainerStarted","Data":"6f1b6279b83cf47851f015f50105b658eae47bd58a32d1ff7cf60128c0353ef7"} Jan 27 16:03:19 crc kubenswrapper[4966]: W0127 16:03:19.105326 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod688b0294_ca80_4f78_8704_31c17a81345b.slice/crio-24f10ced640eef48bb95b93468c0523ec076bb62cdea9404eab5d27dede53262 WatchSource:0}: Error finding container 24f10ced640eef48bb95b93468c0523ec076bb62cdea9404eab5d27dede53262: Status 404 returned error can't find the container with id 24f10ced640eef48bb95b93468c0523ec076bb62cdea9404eab5d27dede53262 Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.105611 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.105629 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vzpmd" event={"ID":"a85c8d57-6d1c-4e5b-ad0c-0074ea018944","Type":"ContainerDied","Data":"e36d66f23abe3ddd787e16c86165edad4635e24bf8d8b93d078215568c64a8d7"} Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.107420 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" event={"ID":"52545325-12bb-4111-9ace-f323527adf59","Type":"ContainerDied","Data":"463574447ad7aa5127252bbbe58786d30b7d149d2a30ad0502657cd33d55cd60"} Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.107494 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pdbrb" Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.109456 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bfe12ec4-548f-4242-94fb-1ac7cac46c73","Type":"ContainerStarted","Data":"d5ed1a54c062fed1feb7f28a7ac4d1b0fb373cedf58224db66bd860876899181"} Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.122021 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.136099 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 16:03:19 crc kubenswrapper[4966]: W0127 16:03:19.150796 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3778d5b6_0474_4399_b163_521cb18b3eda.slice/crio-1b8aa528ece5e3751711ef43e73e56394420b9bd174780811179a42f452ea537 WatchSource:0}: Error finding container 1b8aa528ece5e3751711ef43e73e56394420b9bd174780811179a42f452ea537: Status 404 returned error can't find the container with id 1b8aa528ece5e3751711ef43e73e56394420b9bd174780811179a42f452ea537 Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.170735 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzpmd"] Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.191667 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vzpmd"] Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.207378 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdbrb"] Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.219498 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdbrb"] Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.552541 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.573153 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96kvf"] Jan 27 16:03:19 crc kubenswrapper[4966]: W0127 16:03:19.653455 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dc01362_ea5a_48fe_b67f_1e00b193c36e.slice/crio-b605196044a364f788f4aa0c3a97b3bb0fc084f0d8229f5a8a6b836291e678a2 WatchSource:0}: Error finding container b605196044a364f788f4aa0c3a97b3bb0fc084f0d8229f5a8a6b836291e678a2: Status 404 returned error can't find the container with id b605196044a364f788f4aa0c3a97b3bb0fc084f0d8229f5a8a6b836291e678a2 Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.724755 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr"] Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.748731 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 16:03:19 crc kubenswrapper[4966]: I0127 16:03:19.939609 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5jgjj"] Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.120921 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1dc01362-ea5a-48fe-b67f-1e00b193c36e","Type":"ContainerStarted","Data":"b605196044a364f788f4aa0c3a97b3bb0fc084f0d8229f5a8a6b836291e678a2"} Jan 
27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.122405 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96kvf" event={"ID":"7c442a88-8881-4780-a2c3-eddb5d940209","Type":"ContainerStarted","Data":"73b83180cec9907600dae2a03c75f364743c7fccec521390613de91e2696bfd5"} Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.124499 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76cf6b7d9d-8vc2q" event={"ID":"f71765ab-530f-4029-9091-f63413efd9c2","Type":"ContainerStarted","Data":"14f7a452e936c7dff463604f6c2119db61c0895527f4b1ce4cdfb56bf4b7ae2f"} Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.125920 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3778d5b6-0474-4399-b163-521cb18b3eda","Type":"ContainerStarted","Data":"1b8aa528ece5e3751711ef43e73e56394420b9bd174780811179a42f452ea537"} Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.128390 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"688b0294-ca80-4f78-8704-31c17a81345b","Type":"ContainerStarted","Data":"24f10ced640eef48bb95b93468c0523ec076bb62cdea9404eab5d27dede53262"} Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.133019 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"17569cb4-bb32-47c9-8fed-2bffeda09a7c","Type":"ContainerStarted","Data":"2cbe3a7bbdec707c62cd94610b20c99d4f7d35b8a63723f57e9905a3d8574c83"} Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.155407 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76cf6b7d9d-8vc2q" podStartSLOduration=20.155373477 podStartE2EDuration="20.155373477s" podCreationTimestamp="2026-01-27 16:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:20.145722083 +0000 UTC m=+1266.448515581" watchObservedRunningTime="2026-01-27 16:03:20.155373477 +0000 UTC m=+1266.458166955" Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.533917 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52545325-12bb-4111-9ace-f323527adf59" path="/var/lib/kubelet/pods/52545325-12bb-4111-9ace-f323527adf59/volumes" Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.534294 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85c8d57-6d1c-4e5b-ad0c-0074ea018944" path="/var/lib/kubelet/pods/a85c8d57-6d1c-4e5b-ad0c-0074ea018944/volumes" Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.941307 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.941360 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:20 crc kubenswrapper[4966]: I0127 16:03:20.945300 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:21 crc kubenswrapper[4966]: I0127 16:03:21.142196 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c9809bd0-3c51-46c3-b6c0-0b2576685999","Type":"ContainerStarted","Data":"cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0"} Jan 27 16:03:21 crc kubenswrapper[4966]: I0127 16:03:21.144927 4966 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb","Type":"ContainerStarted","Data":"bd0a2da95b03bf8155f1de1f0fff387b68f0343f263fee5bbb7fdf0b0e05dcb1"} Jan 27 16:03:21 crc kubenswrapper[4966]: I0127 16:03:21.147238 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3744b7e0-d355-43b7-bbf3-853416fb4483","Type":"ContainerStarted","Data":"90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e"} Jan 27 16:03:21 crc kubenswrapper[4966]: I0127 16:03:21.149729 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b278fb8-add2-4ddc-9e93-71962f1bb6fa","Type":"ContainerStarted","Data":"d49edacde651fb39c747b161fac2fee2073f47d6e0a4fad1361e65b9d66a4739"} Jan 27 16:03:21 crc kubenswrapper[4966]: I0127 16:03:21.154070 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76cf6b7d9d-8vc2q" Jan 27 16:03:21 crc kubenswrapper[4966]: I0127 16:03:21.336999 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-774cdb758b-bmhpk"] Jan 27 16:03:22 crc kubenswrapper[4966]: W0127 16:03:22.635690 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1be6855_0a73_406a_93d5_625f7fca558b.slice/crio-80de2df1ee5c3a2f2086aec7431ace17f5fdcbc239e7cf15d26df1082c7d4c39 WatchSource:0}: Error finding container 80de2df1ee5c3a2f2086aec7431ace17f5fdcbc239e7cf15d26df1082c7d4c39: Status 404 returned error can't find the container with id 80de2df1ee5c3a2f2086aec7431ace17f5fdcbc239e7cf15d26df1082c7d4c39 Jan 27 16:03:23 crc kubenswrapper[4966]: I0127 16:03:23.170216 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr" event={"ID":"1e34c83b-3e54-47c0-88c7-57c3065deda1","Type":"ContainerStarted","Data":"887fdecb872f6a70627674463762d0e06ca248ab65a285626d5832cacda62c74"} Jan 27 16:03:23 crc kubenswrapper[4966]: I0127 16:03:23.171589 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1be6855-0a73-406a-93d5-625f7fca558b","Type":"ContainerStarted","Data":"80de2df1ee5c3a2f2086aec7431ace17f5fdcbc239e7cf15d26df1082c7d4c39"} Jan 27 16:03:23 crc kubenswrapper[4966]: I0127 16:03:23.174127 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5jgjj" event={"ID":"ae54b281-3b7e-412e-8575-9096f191343e","Type":"ContainerStarted","Data":"e9aaf3f838911926f1160ae0c250c8fbb942df7b777318f5331e3179bcef7fbb"} Jan 27 16:03:31 crc kubenswrapper[4966]: I0127 16:03:31.264876 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bfe12ec4-548f-4242-94fb-1ac7cac46c73","Type":"ContainerStarted","Data":"ba361a5db6d741a2d7ea6113493c0ab85a4cb39452d506ae14be6858219ed125"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.276832 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96kvf" event={"ID":"7c442a88-8881-4780-a2c3-eddb5d940209","Type":"ContainerStarted","Data":"c6f7d49b7d834529744e245c5fa2171f936dadf25e8cb8f86d216aada01b90c1"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.277447 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-96kvf" Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.280809 4966 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d7787a4-9dd3-438b-8fc8-f6708ede1f4b","Type":"ContainerStarted","Data":"161f26658653b3872e9663c07498908d32eaa975eff30c7343026a6199b18c2e"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.280964 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.284202 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3778d5b6-0474-4399-b163-521cb18b3eda","Type":"ContainerStarted","Data":"1d47e3db7c67ed373977c9b054b9bfb18c8f75ca578ea3a71fb2a1916a6f85a8"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.284283 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.286315 4966 generic.go:334] "Generic (PLEG): container finished" podID="8785f00c-f81b-4bb7-9e88-25445442d30d" containerID="8f1410148aed3db5135b563705c2476aa6bd1d3788c3f6a958d5330a5c2eda88" exitCode=0 Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.286483 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" event={"ID":"8785f00c-f81b-4bb7-9e88-25445442d30d","Type":"ContainerDied","Data":"8f1410148aed3db5135b563705c2476aa6bd1d3788c3f6a958d5330a5c2eda88"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.290438 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr" event={"ID":"1e34c83b-3e54-47c0-88c7-57c3065deda1","Type":"ContainerStarted","Data":"d29278e38bc00bba2edd2a6ffdba9ef23c331875c8741cd73dd698736f69d6e7"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.302170 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1be6855-0a73-406a-93d5-625f7fca558b","Type":"ContainerStarted","Data":"4d945d25b485dbb36ef27ddb65f7bef66dd6127ef642d1041a250c32aba85bad"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.304598 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"17569cb4-bb32-47c9-8fed-2bffeda09a7c","Type":"ContainerStarted","Data":"186d85f73f14d4c48bc7885b8544895c3fb1e5e74cffd222811a5932ba0091e9"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.310516 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1dc01362-ea5a-48fe-b67f-1e00b193c36e","Type":"ContainerStarted","Data":"08849a989a57411e68e28077b7e5c1f3158069c8bbd5dff300bc5c91793e9c87"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.311852 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96kvf" podStartSLOduration=19.457307302 podStartE2EDuration="30.31181318s" podCreationTimestamp="2026-01-27 16:03:02 +0000 UTC" firstStartedPulling="2026-01-27 16:03:19.654917976 +0000 UTC m=+1265.957711454" lastFinishedPulling="2026-01-27 16:03:30.509423834 +0000 UTC m=+1276.812217332" observedRunningTime="2026-01-27 16:03:32.300283508 +0000 UTC m=+1278.603077026" watchObservedRunningTime="2026-01-27 16:03:32.31181318 +0000 UTC m=+1278.614606678" Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.313154 4966 generic.go:334] "Generic (PLEG): container finished" podID="ae54b281-3b7e-412e-8575-9096f191343e" 
containerID="4ff457199350bc6be2f490c2ca71313203d6d2b36a99705d0b8e13b7520d2a2d" exitCode=0 Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.313208 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5jgjj" event={"ID":"ae54b281-3b7e-412e-8575-9096f191343e","Type":"ContainerDied","Data":"4ff457199350bc6be2f490c2ca71313203d6d2b36a99705d0b8e13b7520d2a2d"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.317385 4966 generic.go:334] "Generic (PLEG): container finished" podID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" containerID="5b32d2b910f9d60ead37a82492ee7a1447f96a1eb6ad1ddd96e0f1ca91ad3296" exitCode=0 Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.317444 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" event={"ID":"d76b62e7-a8ef-4976-90c0-6851a364d8d0","Type":"ContainerDied","Data":"5b32d2b910f9d60ead37a82492ee7a1447f96a1eb6ad1ddd96e0f1ca91ad3296"} Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.341658 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.352699932 podStartE2EDuration="35.341615965s" podCreationTimestamp="2026-01-27 16:02:57 +0000 UTC" firstStartedPulling="2026-01-27 16:03:19.162501438 +0000 UTC m=+1265.465294926" lastFinishedPulling="2026-01-27 16:03:30.151417471 +0000 UTC m=+1276.454210959" observedRunningTime="2026-01-27 16:03:32.333942664 +0000 UTC m=+1278.636736162" watchObservedRunningTime="2026-01-27 16:03:32.341615965 +0000 UTC m=+1278.644409453" Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.356265 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=21.237245706 podStartE2EDuration="33.356246084s" podCreationTimestamp="2026-01-27 16:02:59 +0000 UTC" firstStartedPulling="2026-01-27 16:03:18.927052101 +0000 UTC m=+1265.229845589" lastFinishedPulling="2026-01-27 16:03:31.046052469 +0000 UTC m=+1277.348845967" observedRunningTime="2026-01-27 16:03:32.348205792 +0000 UTC m=+1278.650999280" watchObservedRunningTime="2026-01-27 16:03:32.356246084 +0000 UTC m=+1278.659039562" Jan 27 16:03:32 crc kubenswrapper[4966]: I0127 16:03:32.402882 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qwptr" podStartSLOduration=24.765182819 podStartE2EDuration="32.402863106s" podCreationTimestamp="2026-01-27 16:03:00 +0000 UTC" firstStartedPulling="2026-01-27 16:03:22.645998474 +0000 UTC m=+1268.948791992" lastFinishedPulling="2026-01-27 16:03:30.283678791 +0000 UTC m=+1276.586472279" observedRunningTime="2026-01-27 16:03:32.362256593 +0000 UTC m=+1278.665050091" watchObservedRunningTime="2026-01-27 16:03:32.402863106 +0000 UTC m=+1278.705656594" Jan 27 16:03:33 crc kubenswrapper[4966]: I0127 16:03:33.331522 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5jgjj" event={"ID":"ae54b281-3b7e-412e-8575-9096f191343e","Type":"ContainerStarted","Data":"b0d4d9d812164f65373482af74f714b1eba539edf5248f1e3274f5687cb7e969"} Jan 27 16:03:33 crc kubenswrapper[4966]: I0127 16:03:33.334781 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" event={"ID":"d76b62e7-a8ef-4976-90c0-6851a364d8d0","Type":"ContainerStarted","Data":"71a692cc9c83014f735efac652b5f2e94108b105a95dc7daceef970f538f7080"} Jan 27 16:03:33 crc kubenswrapper[4966]: I0127 16:03:33.335035 4966 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:03:33 crc kubenswrapper[4966]: I0127 16:03:33.337361 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" event={"ID":"8785f00c-f81b-4bb7-9e88-25445442d30d","Type":"ContainerStarted","Data":"1ce56dffef1a0295e91023e4397b5bf72e6a34657ea261984e65f3143a2422c3"} Jan 27 16:03:33 crc kubenswrapper[4966]: I0127 16:03:33.354179 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" podStartSLOduration=3.757097214 podStartE2EDuration="40.354155301s" podCreationTimestamp="2026-01-27 16:02:53 +0000 UTC" firstStartedPulling="2026-01-27 16:02:54.449631582 +0000 UTC m=+1240.752425070" lastFinishedPulling="2026-01-27 16:03:31.046689659 +0000 UTC m=+1277.349483157" observedRunningTime="2026-01-27 16:03:33.352430717 +0000 UTC m=+1279.655224225" watchObservedRunningTime="2026-01-27 16:03:33.354155301 +0000 UTC m=+1279.656948819" Jan 27 16:03:33 crc kubenswrapper[4966]: I0127 16:03:33.385943 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" podStartSLOduration=3.5696759350000002 podStartE2EDuration="40.385926278s" podCreationTimestamp="2026-01-27 16:02:53 +0000 UTC" firstStartedPulling="2026-01-27 16:02:54.229766945 +0000 UTC m=+1240.532560433" lastFinishedPulling="2026-01-27 16:03:31.046017248 +0000 UTC m=+1277.348810776" observedRunningTime="2026-01-27 16:03:33.374298423 +0000 UTC m=+1279.677091941" watchObservedRunningTime="2026-01-27 16:03:33.385926278 +0000 UTC m=+1279.688719766" Jan 27 16:03:33 crc kubenswrapper[4966]: I0127 16:03:33.566760 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:03:35 crc kubenswrapper[4966]: I0127 16:03:35.371681 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"688b0294-ca80-4f78-8704-31c17a81345b","Type":"ContainerStarted","Data":"bdad632d356195a8fd182b8095953c65b5633b892b8a01e24b4aa8f963c3ffd9"} Jan 27 16:03:35 crc kubenswrapper[4966]: I0127 16:03:35.371805 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="688b0294-ca80-4f78-8704-31c17a81345b" containerName="init-config-reloader" containerID="cri-o://bdad632d356195a8fd182b8095953c65b5633b892b8a01e24b4aa8f963c3ffd9" gracePeriod=600 Jan 27 16:03:35 crc kubenswrapper[4966]: I0127 16:03:35.376345 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5jgjj" event={"ID":"ae54b281-3b7e-412e-8575-9096f191343e","Type":"ContainerStarted","Data":"954e89374da7edb81088f8aaf7888274d2a79eb4115ab1b5b0bf7ddddf27b78d"} Jan 27 16:03:35 crc kubenswrapper[4966]: I0127 16:03:35.376553 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:35 crc kubenswrapper[4966]: I0127 16:03:35.376580 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:03:35 crc kubenswrapper[4966]: I0127 16:03:35.415803 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5jgjj" podStartSLOduration=25.550886686 podStartE2EDuration="33.415783011s" podCreationTimestamp="2026-01-27 16:03:02 +0000 UTC" firstStartedPulling="2026-01-27 16:03:22.644732375 
+0000 UTC m=+1268.947525863" lastFinishedPulling="2026-01-27 16:03:30.50962866 +0000 UTC m=+1276.812422188" observedRunningTime="2026-01-27 16:03:35.413683875 +0000 UTC m=+1281.716477373" watchObservedRunningTime="2026-01-27 16:03:35.415783011 +0000 UTC m=+1281.718576499" Jan 27 16:03:36 crc kubenswrapper[4966]: I0127 16:03:36.387419 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bfe12ec4-548f-4242-94fb-1ac7cac46c73","Type":"ContainerStarted","Data":"959218355decf3ca7494eb98188a6be16bfad54b330f5d297986e57086d44e8e"} Jan 27 16:03:36 crc kubenswrapper[4966]: I0127 16:03:36.390027 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"17569cb4-bb32-47c9-8fed-2bffeda09a7c","Type":"ContainerStarted","Data":"4c06dca5d5108e4f25e14b830f0344035da1a6b79acd56ade5b3b6ee8faed27f"} Jan 27 16:03:36 crc kubenswrapper[4966]: I0127 16:03:36.424211 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.522012967 podStartE2EDuration="31.424179427s" podCreationTimestamp="2026-01-27 16:03:05 +0000 UTC" firstStartedPulling="2026-01-27 16:03:18.199850727 +0000 UTC m=+1264.502644215" lastFinishedPulling="2026-01-27 16:03:35.102017187 +0000 UTC m=+1281.404810675" observedRunningTime="2026-01-27 16:03:36.412692237 +0000 UTC m=+1282.715485745" watchObservedRunningTime="2026-01-27 16:03:36.424179427 +0000 UTC m=+1282.726972955" Jan 27 16:03:36 crc kubenswrapper[4966]: I0127 16:03:36.444878 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.490961485 podStartE2EDuration="34.444854945s" podCreationTimestamp="2026-01-27 16:03:02 +0000 UTC" firstStartedPulling="2026-01-27 16:03:19.173316407 +0000 UTC m=+1265.476109895" lastFinishedPulling="2026-01-27 16:03:35.127209867 +0000 UTC m=+1281.430003355" observedRunningTime="2026-01-27 16:03:36.438221748 +0000 UTC m=+1282.741015246" watchObservedRunningTime="2026-01-27 16:03:36.444854945 +0000 UTC m=+1282.747648433" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.076689 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.077076 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.154155 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.263543 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.335798 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.403209 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.452464 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.476809 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.685485 4966 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r5nws"] Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.686356 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" podUID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" containerName="dnsmasq-dns" containerID="cri-o://71a692cc9c83014f735efac652b5f2e94108b105a95dc7daceef970f538f7080" gracePeriod=10 Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.689853 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.761279 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-v9c7d"] Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.762986 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.772531 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.776027 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-w9xlx"] Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.777882 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.780002 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.793033 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-v9c7d"] Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.814219 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w9xlx"] Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.861821 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.866234 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.872323 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.872542 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.872658 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.872851 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8xkq6" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901281 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-ovn-rundir\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901331 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-config\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901367 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-ovs-rundir\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901404 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901443 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbgv\" (UniqueName: \"kubernetes.io/projected/b9fe0821-6e92-4212-a641-085507df448d-kube-api-access-sdbgv\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901511 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901536 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-config\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " 
pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901571 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkbct\" (UniqueName: \"kubernetes.io/projected/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-kube-api-access-nkbct\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901714 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.901755 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-combined-ca-bundle\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.922085 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.937412 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mzkzx"] Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.942012 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.942593 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" containerName="dnsmasq-dns" containerID="cri-o://1ce56dffef1a0295e91023e4397b5bf72e6a34657ea261984e65f3143a2422c3" gracePeriod=10 Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.943974 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.965975 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-zk5fg"] Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.967796 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.970592 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 16:03:37 crc kubenswrapper[4966]: I0127 16:03:37.974954 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zk5fg"] Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.003701 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.003752 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhm65\" (UniqueName: \"kubernetes.io/projected/3b38986f-892c-45df-9229-2d4dae664b48-kube-api-access-fhm65\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.003785 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-config\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.003831 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkbct\" (UniqueName: \"kubernetes.io/projected/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-kube-api-access-nkbct\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.003851 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b38986f-892c-45df-9229-2d4dae664b48-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.003887 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b38986f-892c-45df-9229-2d4dae664b48-scripts\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004045 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004096 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-combined-ca-bundle\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004152 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b38986f-892c-45df-9229-2d4dae664b48-config\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004180 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-ovn-rundir\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004204 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-config\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004234 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b38986f-892c-45df-9229-2d4dae664b48-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004276 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-ovs-rundir\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004308 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004338 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdbgv\" (UniqueName: \"kubernetes.io/projected/b9fe0821-6e92-4212-a641-085507df448d-kube-api-access-sdbgv\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004378 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b38986f-892c-45df-9229-2d4dae664b48-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004412 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b38986f-892c-45df-9229-2d4dae664b48-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.004937 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.005265 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-config\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.005283 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-ovs-rundir\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.005327 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-ovn-rundir\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.011507 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-config\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.013220 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.029287 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-combined-ca-bundle\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.034947 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkbct\" (UniqueName: \"kubernetes.io/projected/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-kube-api-access-nkbct\") pod \"ovn-controller-metrics-w9xlx\" (UID: \"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.064600 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdbgv\" (UniqueName: \"kubernetes.io/projected/b9fe0821-6e92-4212-a641-085507df448d-kube-api-access-sdbgv\") pod \"dnsmasq-dns-6bc7876d45-v9c7d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.084746 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9xlx\" (UID: 
\"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef\") " pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113345 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b38986f-892c-45df-9229-2d4dae664b48-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113397 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhm65\" (UniqueName: \"kubernetes.io/projected/3b38986f-892c-45df-9229-2d4dae664b48-kube-api-access-fhm65\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113435 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b38986f-892c-45df-9229-2d4dae664b48-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113457 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113478 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-config\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113498 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b38986f-892c-45df-9229-2d4dae664b48-scripts\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113532 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-dns-svc\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113584 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5f2c\" (UniqueName: \"kubernetes.io/projected/2f2f7ea2-3cca-492e-af4d-bde605e40529-kube-api-access-s5f2c\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113616 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 
crc kubenswrapper[4966]: I0127 16:03:38.113639 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b38986f-892c-45df-9229-2d4dae664b48-config\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113662 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b38986f-892c-45df-9229-2d4dae664b48-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.113707 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b38986f-892c-45df-9229-2d4dae664b48-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.114846 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b38986f-892c-45df-9229-2d4dae664b48-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.115051 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b38986f-892c-45df-9229-2d4dae664b48-scripts\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.115848 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b38986f-892c-45df-9229-2d4dae664b48-config\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.133542 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b38986f-892c-45df-9229-2d4dae664b48-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.135827 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b38986f-892c-45df-9229-2d4dae664b48-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.138091 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b38986f-892c-45df-9229-2d4dae664b48-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.183447 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhm65\" (UniqueName: \"kubernetes.io/projected/3b38986f-892c-45df-9229-2d4dae664b48-kube-api-access-fhm65\") pod \"ovn-northd-0\" (UID: \"3b38986f-892c-45df-9229-2d4dae664b48\") " pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: 
I0127 16:03:38.200439 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.221036 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-w9xlx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.221339 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.221394 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-config\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.221436 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-dns-svc\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.221490 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5f2c\" (UniqueName: \"kubernetes.io/projected/2f2f7ea2-3cca-492e-af4d-bde605e40529-kube-api-access-s5f2c\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.221525 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.222242 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.222601 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.222746 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-dns-svc\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.222827 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-config\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.227448 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.252363 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5f2c\" (UniqueName: \"kubernetes.io/projected/2f2f7ea2-3cca-492e-af4d-bde605e40529-kube-api-access-s5f2c\") pod \"dnsmasq-dns-8554648995-zk5fg\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.298323 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.449172 4966 generic.go:334] "Generic (PLEG): container finished" podID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerID="08849a989a57411e68e28077b7e5c1f3158069c8bbd5dff300bc5c91793e9c87" exitCode=0 Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.449296 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1dc01362-ea5a-48fe-b67f-1e00b193c36e","Type":"ContainerDied","Data":"08849a989a57411e68e28077b7e5c1f3158069c8bbd5dff300bc5c91793e9c87"} Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.482488 4966 generic.go:334] "Generic (PLEG): container finished" podID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" containerID="71a692cc9c83014f735efac652b5f2e94108b105a95dc7daceef970f538f7080" exitCode=0 Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.482602 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" event={"ID":"d76b62e7-a8ef-4976-90c0-6851a364d8d0","Type":"ContainerDied","Data":"71a692cc9c83014f735efac652b5f2e94108b105a95dc7daceef970f538f7080"} Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.495492 4966 generic.go:334] "Generic (PLEG): container finished" podID="8785f00c-f81b-4bb7-9e88-25445442d30d" containerID="1ce56dffef1a0295e91023e4397b5bf72e6a34657ea261984e65f3143a2422c3" exitCode=0 Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.495640 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" event={"ID":"8785f00c-f81b-4bb7-9e88-25445442d30d","Type":"ContainerDied","Data":"1ce56dffef1a0295e91023e4397b5bf72e6a34657ea261984e65f3143a2422c3"} Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.509500 4966 generic.go:334] "Generic (PLEG): container finished" podID="a1be6855-0a73-406a-93d5-625f7fca558b" containerID="4d945d25b485dbb36ef27ddb65f7bef66dd6127ef642d1041a250c32aba85bad" exitCode=0 Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.509595 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1be6855-0a73-406a-93d5-625f7fca558b","Type":"ContainerDied","Data":"4d945d25b485dbb36ef27ddb65f7bef66dd6127ef642d1041a250c32aba85bad"} Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.604987 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.638783 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vz4z\" (UniqueName: \"kubernetes.io/projected/d76b62e7-a8ef-4976-90c0-6851a364d8d0-kube-api-access-8vz4z\") pod \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.639105 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-dns-svc\") pod \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.639145 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-config\") pod \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\" (UID: \"d76b62e7-a8ef-4976-90c0-6851a364d8d0\") " Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.643724 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76b62e7-a8ef-4976-90c0-6851a364d8d0-kube-api-access-8vz4z" (OuterVolumeSpecName: "kube-api-access-8vz4z") pod "d76b62e7-a8ef-4976-90c0-6851a364d8d0" (UID: "d76b62e7-a8ef-4976-90c0-6851a364d8d0"). InnerVolumeSpecName "kube-api-access-8vz4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.743953 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vz4z\" (UniqueName: \"kubernetes.io/projected/d76b62e7-a8ef-4976-90c0-6851a364d8d0-kube-api-access-8vz4z\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.744163 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d76b62e7-a8ef-4976-90c0-6851a364d8d0" (UID: "d76b62e7-a8ef-4976-90c0-6851a364d8d0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.757637 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-config" (OuterVolumeSpecName: "config") pod "d76b62e7-a8ef-4976-90c0-6851a364d8d0" (UID: "d76b62e7-a8ef-4976-90c0-6851a364d8d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.794286 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.845581 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-config\") pod \"8785f00c-f81b-4bb7-9e88-25445442d30d\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.845828 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-dns-svc\") pod \"8785f00c-f81b-4bb7-9e88-25445442d30d\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.845876 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wkpw\" (UniqueName: \"kubernetes.io/projected/8785f00c-f81b-4bb7-9e88-25445442d30d-kube-api-access-5wkpw\") pod \"8785f00c-f81b-4bb7-9e88-25445442d30d\" (UID: \"8785f00c-f81b-4bb7-9e88-25445442d30d\") " Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.846394 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.846413 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d76b62e7-a8ef-4976-90c0-6851a364d8d0-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.852381 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8785f00c-f81b-4bb7-9e88-25445442d30d-kube-api-access-5wkpw" (OuterVolumeSpecName: "kube-api-access-5wkpw") pod "8785f00c-f81b-4bb7-9e88-25445442d30d" (UID: "8785f00c-f81b-4bb7-9e88-25445442d30d"). InnerVolumeSpecName "kube-api-access-5wkpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.929656 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-config" (OuterVolumeSpecName: "config") pod "8785f00c-f81b-4bb7-9e88-25445442d30d" (UID: "8785f00c-f81b-4bb7-9e88-25445442d30d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.948570 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wkpw\" (UniqueName: \"kubernetes.io/projected/8785f00c-f81b-4bb7-9e88-25445442d30d-kube-api-access-5wkpw\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.948602 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:38 crc kubenswrapper[4966]: I0127 16:03:38.957465 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8785f00c-f81b-4bb7-9e88-25445442d30d" (UID: "8785f00c-f81b-4bb7-9e88-25445442d30d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.050550 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8785f00c-f81b-4bb7-9e88-25445442d30d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.138851 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w9xlx"] Jan 27 16:03:39 crc kubenswrapper[4966]: W0127 16:03:39.150052 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9fe0821_6e92_4212_a641_085507df448d.slice/crio-69c3fa6129c2bc03f74a9207a6686ebfbcf0c4daa4734c04bff8e231f61f5e38 WatchSource:0}: Error finding container 69c3fa6129c2bc03f74a9207a6686ebfbcf0c4daa4734c04bff8e231f61f5e38: Status 404 returned error can't find the container with id 69c3fa6129c2bc03f74a9207a6686ebfbcf0c4daa4734c04bff8e231f61f5e38 Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.150082 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 16:03:39 crc kubenswrapper[4966]: W0127 16:03:39.151524 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b38986f_892c_45df_9229_2d4dae664b48.slice/crio-322d2e31c6b94d25aa76c4a93ba22c3f36df769f6dab6b84fce456762d9b19fd WatchSource:0}: Error finding container 322d2e31c6b94d25aa76c4a93ba22c3f36df769f6dab6b84fce456762d9b19fd: Status 404 returned error can't find the container with id 322d2e31c6b94d25aa76c4a93ba22c3f36df769f6dab6b84fce456762d9b19fd Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.167847 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-v9c7d"] Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.197075 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zk5fg"] Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.529663 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" event={"ID":"8785f00c-f81b-4bb7-9e88-25445442d30d","Type":"ContainerDied","Data":"26d223007cea7554b14f5d20203539f5327fd80d8f68c12592c0bc9a46d1ca22"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.531027 4966 scope.go:117] "RemoveContainer" containerID="1ce56dffef1a0295e91023e4397b5bf72e6a34657ea261984e65f3143a2422c3" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.530009 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.534384 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w9xlx" event={"ID":"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef","Type":"ContainerStarted","Data":"6c97c6e4422a46a503c2a6aee7302d2bddf5721672410ae47d1265310602ae88"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.534440 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w9xlx" event={"ID":"e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef","Type":"ContainerStarted","Data":"d8782142186f69a90e539e6120b3686802c6ae2c130043cb11fcc6d8daa7d283"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.536960 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1be6855-0a73-406a-93d5-625f7fca558b","Type":"ContainerStarted","Data":"8631c1d2e4e6006cc5c0581834e9d9d692faafcdd99032634f07f208377d2c31"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.557386 4966 generic.go:334] "Generic (PLEG): container finished" podID="b9fe0821-6e92-4212-a641-085507df448d" containerID="dea3213c0d9629c7bd904c124f66fd672d155f706afd5744202d692eb41a169c" exitCode=0 Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.557472 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" event={"ID":"b9fe0821-6e92-4212-a641-085507df448d","Type":"ContainerDied","Data":"dea3213c0d9629c7bd904c124f66fd672d155f706afd5744202d692eb41a169c"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.557503 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" event={"ID":"b9fe0821-6e92-4212-a641-085507df448d","Type":"ContainerStarted","Data":"69c3fa6129c2bc03f74a9207a6686ebfbcf0c4daa4734c04bff8e231f61f5e38"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.576007 4966 scope.go:117] "RemoveContainer" containerID="8f1410148aed3db5135b563705c2476aa6bd1d3788c3f6a958d5330a5c2eda88" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.583468 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1dc01362-ea5a-48fe-b67f-1e00b193c36e","Type":"ContainerStarted","Data":"3bb901e699a65f37bdaf20f6ec41a0f298b753e6f1d314eb609b826aa8bcd29a"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.585691 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b38986f-892c-45df-9229-2d4dae664b48","Type":"ContainerStarted","Data":"322d2e31c6b94d25aa76c4a93ba22c3f36df769f6dab6b84fce456762d9b19fd"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.589258 4966 generic.go:334] "Generic (PLEG): container finished" podID="2f2f7ea2-3cca-492e-af4d-bde605e40529" containerID="f93e756ae0f95871edced10cc429e22a4ab8510303259779ad5c4a4735004be7" exitCode=0 Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.589323 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zk5fg" event={"ID":"2f2f7ea2-3cca-492e-af4d-bde605e40529","Type":"ContainerDied","Data":"f93e756ae0f95871edced10cc429e22a4ab8510303259779ad5c4a4735004be7"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.589356 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zk5fg" 
event={"ID":"2f2f7ea2-3cca-492e-af4d-bde605e40529","Type":"ContainerStarted","Data":"ffec6c60892623084bfe924b8191338981eb779047c17324d9b387aec72267b7"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.593131 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-w9xlx" podStartSLOduration=2.593107876 podStartE2EDuration="2.593107876s" podCreationTimestamp="2026-01-27 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:39.568477304 +0000 UTC m=+1285.871270822" watchObservedRunningTime="2026-01-27 16:03:39.593107876 +0000 UTC m=+1285.895901374" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.597998 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.603214 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r5nws" event={"ID":"d76b62e7-a8ef-4976-90c0-6851a364d8d0","Type":"ContainerDied","Data":"12ee20790b5df6cd6b873f1e5cbb9d3dee73296c697b4d40d18e41ca8dc75b32"} Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.636169 4966 scope.go:117] "RemoveContainer" containerID="71a692cc9c83014f735efac652b5f2e94108b105a95dc7daceef970f538f7080" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.661923 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.679910 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=35.952463267 podStartE2EDuration="43.679870208s" podCreationTimestamp="2026-01-27 16:02:56 +0000 UTC" firstStartedPulling="2026-01-27 16:03:22.63850505 +0000 UTC m=+1268.941298548" lastFinishedPulling="2026-01-27 16:03:30.365912001 +0000 UTC m=+1276.668705489" observedRunningTime="2026-01-27 16:03:39.608006984 +0000 UTC m=+1285.910800482" watchObservedRunningTime="2026-01-27 16:03:39.679870208 +0000 UTC m=+1285.982663696" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.699108 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-v9c7d"] Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.721657 4966 scope.go:117] "RemoveContainer" containerID="5b32d2b910f9d60ead37a82492ee7a1447f96a1eb6ad1ddd96e0f1ca91ad3296" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.751274 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7qhd5"] Jan 27 16:03:39 crc kubenswrapper[4966]: E0127 16:03:39.751721 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" containerName="dnsmasq-dns" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.751735 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" containerName="dnsmasq-dns" Jan 27 16:03:39 crc kubenswrapper[4966]: E0127 16:03:39.751756 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" containerName="init" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.751763 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" containerName="init" Jan 27 16:03:39 crc kubenswrapper[4966]: E0127 16:03:39.751781 4966 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" containerName="dnsmasq-dns" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.751787 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" containerName="dnsmasq-dns" Jan 27 16:03:39 crc kubenswrapper[4966]: E0127 16:03:39.751806 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" containerName="init" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.751811 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" containerName="init" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.752007 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" containerName="dnsmasq-dns" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.752027 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" containerName="dnsmasq-dns" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.753825 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.791270 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=34.920541557 podStartE2EDuration="45.791245482s" podCreationTimestamp="2026-01-27 16:02:54 +0000 UTC" firstStartedPulling="2026-01-27 16:03:19.657773016 +0000 UTC m=+1265.960566504" lastFinishedPulling="2026-01-27 16:03:30.528476931 +0000 UTC m=+1276.831270429" observedRunningTime="2026-01-27 16:03:39.732344804 +0000 UTC m=+1286.035138302" watchObservedRunningTime="2026-01-27 16:03:39.791245482 +0000 UTC m=+1286.094038970" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.794567 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7qhd5"] Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.830150 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mzkzx"] Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.867768 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mzkzx"] Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.914800 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r5nws"] Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.923651 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r5nws"] Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.979819 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.980139 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.980252 
4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.980276 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-config\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:39 crc kubenswrapper[4966]: I0127 16:03:39.980377 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxr2w\" (UniqueName: \"kubernetes.io/projected/df617bba-0d77-409f-a210-e49e176d7053-kube-api-access-kxr2w\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.082354 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.082417 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.082509 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.082533 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-config\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.082586 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxr2w\" (UniqueName: \"kubernetes.io/projected/df617bba-0d77-409f-a210-e49e176d7053-kube-api-access-kxr2w\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.085138 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: E0127 16:03:40.088509 4966 log.go:32] "CreateContainer in sandbox from 
runtime service failed" err=< Jan 27 16:03:40 crc kubenswrapper[4966]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/b9fe0821-6e92-4212-a641-085507df448d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 16:03:40 crc kubenswrapper[4966]: > podSandboxID="69c3fa6129c2bc03f74a9207a6686ebfbcf0c4daa4734c04bff8e231f61f5e38" Jan 27 16:03:40 crc kubenswrapper[4966]: E0127 16:03:40.088684 4966 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 16:03:40 crc kubenswrapper[4966]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8ch647h5fdh676h5c8h566h96h5d8hdh569h64dh5b5h587h55h5cch58dh658h67h5f6h64fh648h6h59fh65ch7hf9hf6h74hf8hch596h5b8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdbgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6bc7876d45-v9c7d_openstack(b9fe0821-6e92-4212-a641-085507df448d): CreateContainerError: container create failed: mount 
`/var/lib/kubelet/pods/b9fe0821-6e92-4212-a641-085507df448d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 16:03:40 crc kubenswrapper[4966]: > logger="UnhandledError" Jan 27 16:03:40 crc kubenswrapper[4966]: E0127 16:03:40.089938 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b9fe0821-6e92-4212-a641-085507df448d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" podUID="b9fe0821-6e92-4212-a641-085507df448d" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.091406 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.091636 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-config\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.094557 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.100430 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxr2w\" (UniqueName: \"kubernetes.io/projected/df617bba-0d77-409f-a210-e49e176d7053-kube-api-access-kxr2w\") pod \"dnsmasq-dns-b8fbc5445-7qhd5\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.105090 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.120270 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.120318 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:03:40 crc kubenswrapper[4966]: E0127 16:03:40.233627 4966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 16:03:40 crc kubenswrapper[4966]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/2f2f7ea2-3cca-492e-af4d-bde605e40529/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 16:03:40 crc kubenswrapper[4966]: > podSandboxID="ffec6c60892623084bfe924b8191338981eb779047c17324d9b387aec72267b7" Jan 27 16:03:40 crc kubenswrapper[4966]: E0127 16:03:40.234474 4966 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 16:03:40 crc kubenswrapper[4966]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5f2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8554648995-zk5fg_openstack(2f2f7ea2-3cca-492e-af4d-bde605e40529): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/2f2f7ea2-3cca-492e-af4d-bde605e40529/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 16:03:40 crc kubenswrapper[4966]: > logger="UnhandledError" Jan 27 16:03:40 crc kubenswrapper[4966]: E0127 16:03:40.235837 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/2f2f7ea2-3cca-492e-af4d-bde605e40529/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8554648995-zk5fg" podUID="2f2f7ea2-3cca-492e-af4d-bde605e40529" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.553231 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" path="/var/lib/kubelet/pods/8785f00c-f81b-4bb7-9e88-25445442d30d/volumes" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.555734 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76b62e7-a8ef-4976-90c0-6851a364d8d0" path="/var/lib/kubelet/pods/d76b62e7-a8ef-4976-90c0-6851a364d8d0/volumes" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.736175 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7qhd5"] Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.787258 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.794078 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.806614 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.806999 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.806934 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.807223 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-b7x64" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.817993 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.904347 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a59c903b-6e40-43bd-a120-e47e504cf5a9-cache\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.904419 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9e43a0d5-e5b0-422a-808f-5dce466bac51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e43a0d5-e5b0-422a-808f-5dce466bac51\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.904443 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.904464 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z85fp\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-kube-api-access-z85fp\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.904551 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a59c903b-6e40-43bd-a120-e47e504cf5a9-lock\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:40 crc kubenswrapper[4966]: I0127 16:03:40.904573 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59c903b-6e40-43bd-a120-e47e504cf5a9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.005826 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a59c903b-6e40-43bd-a120-e47e504cf5a9-lock\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc 
kubenswrapper[4966]: I0127 16:03:41.005877 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59c903b-6e40-43bd-a120-e47e504cf5a9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.005968 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a59c903b-6e40-43bd-a120-e47e504cf5a9-cache\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.006006 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9e43a0d5-e5b0-422a-808f-5dce466bac51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e43a0d5-e5b0-422a-808f-5dce466bac51\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.006031 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.006046 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z85fp\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-kube-api-access-z85fp\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: E0127 16:03:41.007084 4966 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 16:03:41 crc kubenswrapper[4966]: E0127 16:03:41.007100 4966 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 16:03:41 crc kubenswrapper[4966]: E0127 16:03:41.007145 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift podName:a59c903b-6e40-43bd-a120-e47e504cf5a9 nodeName:}" failed. No retries permitted until 2026-01-27 16:03:41.507129348 +0000 UTC m=+1287.809922836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift") pod "swift-storage-0" (UID: "a59c903b-6e40-43bd-a120-e47e504cf5a9") : configmap "swift-ring-files" not found Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.011824 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.011869 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9e43a0d5-e5b0-422a-808f-5dce466bac51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e43a0d5-e5b0-422a-808f-5dce466bac51\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f788e4ca4882ab2213beb632ed8bd77c1ee758006bd167218117970290f047bd/globalmount\"" pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.013299 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a59c903b-6e40-43bd-a120-e47e504cf5a9-cache\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.013486 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a59c903b-6e40-43bd-a120-e47e504cf5a9-lock\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.025234 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z85fp\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-kube-api-access-z85fp\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.038334 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59c903b-6e40-43bd-a120-e47e504cf5a9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.077540 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9e43a0d5-e5b0-422a-808f-5dce466bac51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e43a0d5-e5b0-422a-808f-5dce466bac51\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.418035 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6r97n"] Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.419359 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.421800 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.422062 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.422226 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.454126 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6r97n"] Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.516393 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-dispersionconf\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.516493 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-ring-data-devices\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.516527 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-combined-ca-bundle\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.516554 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5036c06b-cb10-4530-9315-4ba4dee273f0-etc-swift\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.516728 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.516817 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-scripts\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: E0127 16:03:41.516927 4966 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 16:03:41 crc kubenswrapper[4966]: E0127 16:03:41.516956 4966 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 16:03:41 crc kubenswrapper[4966]: E0127 16:03:41.517014 4966 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift podName:a59c903b-6e40-43bd-a120-e47e504cf5a9 nodeName:}" failed. No retries permitted until 2026-01-27 16:03:42.516990504 +0000 UTC m=+1288.819784002 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift") pod "swift-storage-0" (UID: "a59c903b-6e40-43bd-a120-e47e504cf5a9") : configmap "swift-ring-files" not found Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.516941 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xsts\" (UniqueName: \"kubernetes.io/projected/5036c06b-cb10-4530-9315-4ba4dee273f0-kube-api-access-9xsts\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.517065 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-swiftconf\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.624741 4966 generic.go:334] "Generic (PLEG): container finished" podID="688b0294-ca80-4f78-8704-31c17a81345b" containerID="bdad632d356195a8fd182b8095953c65b5633b892b8a01e24b4aa8f963c3ffd9" exitCode=0 Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.624803 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"688b0294-ca80-4f78-8704-31c17a81345b","Type":"ContainerDied","Data":"bdad632d356195a8fd182b8095953c65b5633b892b8a01e24b4aa8f963c3ffd9"} Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.635240 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-scripts\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.635351 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xsts\" (UniqueName: \"kubernetes.io/projected/5036c06b-cb10-4530-9315-4ba4dee273f0-kube-api-access-9xsts\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.635399 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-swiftconf\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.635600 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-dispersionconf\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.635845 4966 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-ring-data-devices\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.635882 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-combined-ca-bundle\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.636036 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5036c06b-cb10-4530-9315-4ba4dee273f0-etc-swift\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.640671 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-scripts\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.644588 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5036c06b-cb10-4530-9315-4ba4dee273f0-etc-swift\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.645436 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-ring-data-devices\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.678530 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-combined-ca-bundle\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.678795 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-dispersionconf\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.679079 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-swiftconf\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.691604 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xsts\" (UniqueName: 
\"kubernetes.io/projected/5036c06b-cb10-4530-9315-4ba4dee273f0-kube-api-access-9xsts\") pod \"swift-ring-rebalance-6r97n\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:41 crc kubenswrapper[4966]: I0127 16:03:41.738677 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:03:42 crc kubenswrapper[4966]: I0127 16:03:42.555961 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:42 crc kubenswrapper[4966]: E0127 16:03:42.556233 4966 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 16:03:42 crc kubenswrapper[4966]: E0127 16:03:42.556441 4966 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 16:03:42 crc kubenswrapper[4966]: E0127 16:03:42.556515 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift podName:a59c903b-6e40-43bd-a120-e47e504cf5a9 nodeName:}" failed. No retries permitted until 2026-01-27 16:03:44.556492486 +0000 UTC m=+1290.859285984 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift") pod "swift-storage-0" (UID: "a59c903b-6e40-43bd-a120-e47e504cf5a9") : configmap "swift-ring-files" not found Jan 27 16:03:43 crc kubenswrapper[4966]: I0127 16:03:43.567074 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-mzkzx" podUID="8785f00c-f81b-4bb7-9e88-25445442d30d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 27 16:03:43 crc kubenswrapper[4966]: I0127 16:03:43.966249 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.098540 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-ovsdbserver-sb\") pod \"b9fe0821-6e92-4212-a641-085507df448d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.098812 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-config\") pod \"b9fe0821-6e92-4212-a641-085507df448d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.098917 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-dns-svc\") pod \"b9fe0821-6e92-4212-a641-085507df448d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.099205 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdbgv\" (UniqueName: \"kubernetes.io/projected/b9fe0821-6e92-4212-a641-085507df448d-kube-api-access-sdbgv\") pod \"b9fe0821-6e92-4212-a641-085507df448d\" (UID: \"b9fe0821-6e92-4212-a641-085507df448d\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.110952 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9fe0821-6e92-4212-a641-085507df448d-kube-api-access-sdbgv" (OuterVolumeSpecName: "kube-api-access-sdbgv") pod "b9fe0821-6e92-4212-a641-085507df448d" (UID: "b9fe0821-6e92-4212-a641-085507df448d"). InnerVolumeSpecName "kube-api-access-sdbgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.203731 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdbgv\" (UniqueName: \"kubernetes.io/projected/b9fe0821-6e92-4212-a641-085507df448d-kube-api-access-sdbgv\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.205771 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b9fe0821-6e92-4212-a641-085507df448d" (UID: "b9fe0821-6e92-4212-a641-085507df448d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.211618 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b9fe0821-6e92-4212-a641-085507df448d" (UID: "b9fe0821-6e92-4212-a641-085507df448d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.220444 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-config" (OuterVolumeSpecName: "config") pod "b9fe0821-6e92-4212-a641-085507df448d" (UID: "b9fe0821-6e92-4212-a641-085507df448d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.305118 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.305410 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.305420 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9fe0821-6e92-4212-a641-085507df448d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.333759 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.507771 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-web-config\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.507844 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/688b0294-ca80-4f78-8704-31c17a81345b-config-out\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.507916 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-tls-assets\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.508005 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-0\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.508047 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-1\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.508106 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbq6s\" (UniqueName: \"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-kube-api-access-tbq6s\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.508138 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-config\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") 
" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.508179 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-2\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.508321 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.508360 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-thanos-prometheus-http-client-file\") pod \"688b0294-ca80-4f78-8704-31c17a81345b\" (UID: \"688b0294-ca80-4f78-8704-31c17a81345b\") " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.508678 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.509186 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.509669 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.510272 4966 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.510303 4966 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.516528 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6r97n"] Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.518915 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.518915 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688b0294-ca80-4f78-8704-31c17a81345b-config-out" (OuterVolumeSpecName: "config-out") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.518967 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.518989 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-config" (OuterVolumeSpecName: "config") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.519035 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-kube-api-access-tbq6s" (OuterVolumeSpecName: "kube-api-access-tbq6s") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "kube-api-access-tbq6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.519109 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-web-config" (OuterVolumeSpecName: "web-config") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.527733 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "688b0294-ca80-4f78-8704-31c17a81345b" (UID: "688b0294-ca80-4f78-8704-31c17a81345b"). InnerVolumeSpecName "pvc-09717264-e586-40d9-8eb7-6ef2244b94f6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.612574 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: E0127 16:03:44.613230 4966 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 16:03:44 crc kubenswrapper[4966]: E0127 16:03:44.613255 4966 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 16:03:44 crc kubenswrapper[4966]: E0127 16:03:44.613315 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift podName:a59c903b-6e40-43bd-a120-e47e504cf5a9 nodeName:}" failed. No retries permitted until 2026-01-27 16:03:48.613296454 +0000 UTC m=+1294.916089942 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift") pod "swift-storage-0" (UID: "a59c903b-6e40-43bd-a120-e47e504cf5a9") : configmap "swift-ring-files" not found Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.613539 4966 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.613572 4966 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.613584 4966 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/688b0294-ca80-4f78-8704-31c17a81345b-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.613592 4966 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.613600 4966 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/688b0294-ca80-4f78-8704-31c17a81345b-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.613610 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbq6s\" (UniqueName: 
\"kubernetes.io/projected/688b0294-ca80-4f78-8704-31c17a81345b-kube-api-access-tbq6s\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.613618 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/688b0294-ca80-4f78-8704-31c17a81345b-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.613660 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") on node \"crc\" " Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.671511 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.671688 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-09717264-e586-40d9-8eb7-6ef2244b94f6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6") on node "crc" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.678978 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"688b0294-ca80-4f78-8704-31c17a81345b","Type":"ContainerDied","Data":"24f10ced640eef48bb95b93468c0523ec076bb62cdea9404eab5d27dede53262"} Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.679035 4966 scope.go:117] "RemoveContainer" containerID="bdad632d356195a8fd182b8095953c65b5633b892b8a01e24b4aa8f963c3ffd9" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.679202 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.683874 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" event={"ID":"b9fe0821-6e92-4212-a641-085507df448d","Type":"ContainerDied","Data":"69c3fa6129c2bc03f74a9207a6686ebfbcf0c4daa4734c04bff8e231f61f5e38"} Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.683985 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-v9c7d" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.689994 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b38986f-892c-45df-9229-2d4dae664b48","Type":"ContainerStarted","Data":"8d4762a1a849ef0103aed24b4b8563d2c483921ee0a103cc6b3022d4706d6072"} Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.693843 4966 generic.go:334] "Generic (PLEG): container finished" podID="df617bba-0d77-409f-a210-e49e176d7053" containerID="3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332" exitCode=0 Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.693937 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" event={"ID":"df617bba-0d77-409f-a210-e49e176d7053","Type":"ContainerDied","Data":"3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332"} Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.693966 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" event={"ID":"df617bba-0d77-409f-a210-e49e176d7053","Type":"ContainerStarted","Data":"16fa5b3e58848df726bb58967f60da93d5cba533d8511106ca2e78dddbd466ba"} Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.697828 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zk5fg" event={"ID":"2f2f7ea2-3cca-492e-af4d-bde605e40529","Type":"ContainerStarted","Data":"6b20d448c2699a2a61fa7c08aa9f9d280247f4e9124758fdfd6b1fd8cbf45625"} Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.699454 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.706885 4966 scope.go:117] "RemoveContainer" containerID="dea3213c0d9629c7bd904c124f66fd672d155f706afd5744202d692eb41a169c" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.729661 4966 reconciler_common.go:293] "Volume detached for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.761375 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6r97n" event={"ID":"5036c06b-cb10-4530-9315-4ba4dee273f0","Type":"ContainerStarted","Data":"beb508747b2f98f9e70f7eadad05189b6abec40d48fb19462d9e3cb6b574b17f"} Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.797733 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.809555 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.839720 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:03:44 crc kubenswrapper[4966]: E0127 16:03:44.840181 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688b0294-ca80-4f78-8704-31c17a81345b" containerName="init-config-reloader" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.840197 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="688b0294-ca80-4f78-8704-31c17a81345b" containerName="init-config-reloader" Jan 27 16:03:44 crc kubenswrapper[4966]: E0127 16:03:44.840209 4966 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b9fe0821-6e92-4212-a641-085507df448d" containerName="init" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.840215 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9fe0821-6e92-4212-a641-085507df448d" containerName="init" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.840451 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="688b0294-ca80-4f78-8704-31c17a81345b" containerName="init-config-reloader" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.840492 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9fe0821-6e92-4212-a641-085507df448d" containerName="init" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.842390 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.856426 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.856710 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.856881 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.857157 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lvfmv" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.857234 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.857457 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.858180 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.874256 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.874438 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.877465 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-zk5fg" podStartSLOduration=7.877448001 podStartE2EDuration="7.877448001s" podCreationTimestamp="2026-01-27 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:44.858419584 +0000 UTC m=+1291.161213092" watchObservedRunningTime="2026-01-27 16:03:44.877448001 +0000 UTC m=+1291.180241489" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936241 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936595 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936622 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936655 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936744 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936763 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936780 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-559b4\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-kube-api-access-559b4\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936803 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936828 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.936859 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.937585 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-v9c7d"] Jan 27 16:03:44 crc kubenswrapper[4966]: I0127 16:03:44.963473 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-v9c7d"] Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.038998 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.039252 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.040418 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.040686 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.040915 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.041024 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-559b4\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-kube-api-access-559b4\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.041108 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.041545 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.041675 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.041841 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.041707 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.042144 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.042277 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.048008 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.048103 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.050497 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc 
kubenswrapper[4966]: I0127 16:03:45.061293 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.061398 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.061540 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.061575 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a33938e059c072199d3b6223bdfa367a3b3bcef4e32c284009ff56b852d373de/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.064811 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-559b4\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-kube-api-access-559b4\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.106310 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.177878 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.797476 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b38986f-892c-45df-9229-2d4dae664b48","Type":"ContainerStarted","Data":"308d77a7d7cc625cf45c795c2ce3d33de3b23a8d58ba3b50b5bc6dc257007470"} Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.798355 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.799538 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.801925 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" event={"ID":"df617bba-0d77-409f-a210-e49e176d7053","Type":"ContainerStarted","Data":"a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8"} Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.802017 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.825807 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.730308343 podStartE2EDuration="8.825784584s" podCreationTimestamp="2026-01-27 16:03:37 +0000 UTC" firstStartedPulling="2026-01-27 16:03:39.159332727 +0000 UTC m=+1285.462126215" lastFinishedPulling="2026-01-27 16:03:44.254808978 +0000 UTC m=+1290.557602456" observedRunningTime="2026-01-27 16:03:45.818133583 +0000 UTC m=+1292.120927081" watchObservedRunningTime="2026-01-27 16:03:45.825784584 +0000 UTC m=+1292.128578072" Jan 27 16:03:45 crc kubenswrapper[4966]: I0127 16:03:45.852664 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" podStartSLOduration=6.852645836 podStartE2EDuration="6.852645836s" podCreationTimestamp="2026-01-27 16:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:45.845308886 +0000 UTC m=+1292.148102384" watchObservedRunningTime="2026-01-27 16:03:45.852645836 +0000 UTC m=+1292.155439324" Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.242281 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.242587 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.358241 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.403381 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-774cdb758b-bmhpk" podUID="efa660ea-77f1-49d3-809c-d8f73519dd08" containerName="console" containerID="cri-o://c47139525faa9af3edeb86b680fa64ead596a43b74df629b52759e228e74153d" gracePeriod=15 Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.544310 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688b0294-ca80-4f78-8704-31c17a81345b" path="/var/lib/kubelet/pods/688b0294-ca80-4f78-8704-31c17a81345b/volumes" Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.547485 4966 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9fe0821-6e92-4212-a641-085507df448d" path="/var/lib/kubelet/pods/b9fe0821-6e92-4212-a641-085507df448d/volumes" Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.815046 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerStarted","Data":"c1906b55fcf5c66d8749bf1780b217583e86e1db016a907cb6e76987393c7f97"} Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.826121 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-774cdb758b-bmhpk_efa660ea-77f1-49d3-809c-d8f73519dd08/console/0.log" Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.826169 4966 generic.go:334] "Generic (PLEG): container finished" podID="efa660ea-77f1-49d3-809c-d8f73519dd08" containerID="c47139525faa9af3edeb86b680fa64ead596a43b74df629b52759e228e74153d" exitCode=2 Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.826324 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-774cdb758b-bmhpk" event={"ID":"efa660ea-77f1-49d3-809c-d8f73519dd08","Type":"ContainerDied","Data":"c47139525faa9af3edeb86b680fa64ead596a43b74df629b52759e228e74153d"} Jan 27 16:03:46 crc kubenswrapper[4966]: I0127 16:03:46.994905 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.468414 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2a93-account-create-update-hz627"] Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.469865 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.478611 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.479101 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2a93-account-create-update-hz627"] Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.495357 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2t65\" (UniqueName: \"kubernetes.io/projected/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-kube-api-access-x2t65\") pod \"keystone-2a93-account-create-update-hz627\" (UID: \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\") " pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.495443 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-operator-scripts\") pod \"keystone-2a93-account-create-update-hz627\" (UID: \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\") " pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.530791 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.530848 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.544627 4966 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-db-create-rlmr2"] Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.546000 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.551494 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rlmr2"] Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.600731 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2t65\" (UniqueName: \"kubernetes.io/projected/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-kube-api-access-x2t65\") pod \"keystone-2a93-account-create-update-hz627\" (UID: \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\") " pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.600829 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-operator-scripts\") pod \"keystone-2a93-account-create-update-hz627\" (UID: \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\") " pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.601691 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-operator-scripts\") pod \"keystone-2a93-account-create-update-hz627\" (UID: \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\") " pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.622762 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2t65\" (UniqueName: \"kubernetes.io/projected/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-kube-api-access-x2t65\") pod \"keystone-2a93-account-create-update-hz627\" (UID: \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\") " pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.695371 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.703193 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22399f16-c5ed-4686-9d3e-f73bc0bea72e-operator-scripts\") pod \"keystone-db-create-rlmr2\" (UID: \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\") " pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.703363 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfgg5\" (UniqueName: \"kubernetes.io/projected/22399f16-c5ed-4686-9d3e-f73bc0bea72e-kube-api-access-pfgg5\") pod \"keystone-db-create-rlmr2\" (UID: \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\") " pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.791350 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.791346 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-c879q"] Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.792971 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-c879q" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.802410 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c879q"] Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.808374 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgg5\" (UniqueName: \"kubernetes.io/projected/22399f16-c5ed-4686-9d3e-f73bc0bea72e-kube-api-access-pfgg5\") pod \"keystone-db-create-rlmr2\" (UID: \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\") " pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.809267 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22399f16-c5ed-4686-9d3e-f73bc0bea72e-operator-scripts\") pod \"keystone-db-create-rlmr2\" (UID: \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\") " pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.819336 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22399f16-c5ed-4686-9d3e-f73bc0bea72e-operator-scripts\") pod \"keystone-db-create-rlmr2\" (UID: \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\") " pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.843129 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfgg5\" (UniqueName: \"kubernetes.io/projected/22399f16-c5ed-4686-9d3e-f73bc0bea72e-kube-api-access-pfgg5\") pod \"keystone-db-create-rlmr2\" (UID: \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\") " pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.861856 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f2b8-account-create-update-x59c5"] Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.863321 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.864958 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.889954 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f2b8-account-create-update-x59c5"] Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.910088 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lcp\" (UniqueName: \"kubernetes.io/projected/e59f8f2a-53c9-4e5a-bb13-e79058d62972-kube-api-access-c7lcp\") pod \"placement-db-create-c879q\" (UID: \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\") " pod="openstack/placement-db-create-c879q" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.910313 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59f8f2a-53c9-4e5a-bb13-e79058d62972-operator-scripts\") pod \"placement-db-create-c879q\" (UID: \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\") " pod="openstack/placement-db-create-c879q" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.910578 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxc99\" (UniqueName: \"kubernetes.io/projected/006407e5-c179-4f35-874a-af73cc024106-kube-api-access-lxc99\") pod \"placement-f2b8-account-create-update-x59c5\" (UID: \"006407e5-c179-4f35-874a-af73cc024106\") " pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.910627 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006407e5-c179-4f35-874a-af73cc024106-operator-scripts\") pod \"placement-f2b8-account-create-update-x59c5\" (UID: \"006407e5-c179-4f35-874a-af73cc024106\") " pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.913580 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:47 crc kubenswrapper[4966]: I0127 16:03:47.929361 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.012824 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59f8f2a-53c9-4e5a-bb13-e79058d62972-operator-scripts\") pod \"placement-db-create-c879q\" (UID: \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\") " pod="openstack/placement-db-create-c879q" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.013392 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxc99\" (UniqueName: \"kubernetes.io/projected/006407e5-c179-4f35-874a-af73cc024106-kube-api-access-lxc99\") pod \"placement-f2b8-account-create-update-x59c5\" (UID: \"006407e5-c179-4f35-874a-af73cc024106\") " pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.013560 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59f8f2a-53c9-4e5a-bb13-e79058d62972-operator-scripts\") pod \"placement-db-create-c879q\" (UID: \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\") " pod="openstack/placement-db-create-c879q" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.013851 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006407e5-c179-4f35-874a-af73cc024106-operator-scripts\") pod \"placement-f2b8-account-create-update-x59c5\" (UID: \"006407e5-c179-4f35-874a-af73cc024106\") " pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.014703 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006407e5-c179-4f35-874a-af73cc024106-operator-scripts\") pod \"placement-f2b8-account-create-update-x59c5\" (UID: \"006407e5-c179-4f35-874a-af73cc024106\") " pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.015538 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7lcp\" (UniqueName: \"kubernetes.io/projected/e59f8f2a-53c9-4e5a-bb13-e79058d62972-kube-api-access-c7lcp\") pod \"placement-db-create-c879q\" (UID: \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\") " pod="openstack/placement-db-create-c879q" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.030039 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxc99\" (UniqueName: \"kubernetes.io/projected/006407e5-c179-4f35-874a-af73cc024106-kube-api-access-lxc99\") pod \"placement-f2b8-account-create-update-x59c5\" (UID: \"006407e5-c179-4f35-874a-af73cc024106\") " pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.043190 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7lcp\" (UniqueName: \"kubernetes.io/projected/e59f8f2a-53c9-4e5a-bb13-e79058d62972-kube-api-access-c7lcp\") pod \"placement-db-create-c879q\" (UID: \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\") " pod="openstack/placement-db-create-c879q" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.112650 4966 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c879q" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.259307 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-6k7tg"] Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.261462 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.267788 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6k7tg"] Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.299088 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.417197 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3f19-account-create-update-h6kd6"] Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.418562 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.423649 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd643b41-42c6-4d59-8ee9-591b9717a088-operator-scripts\") pod \"glance-db-create-6k7tg\" (UID: \"fd643b41-42c6-4d59-8ee9-591b9717a088\") " pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.423767 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxn2q\" (UniqueName: \"kubernetes.io/projected/fd643b41-42c6-4d59-8ee9-591b9717a088-kube-api-access-bxn2q\") pod \"glance-db-create-6k7tg\" (UID: \"fd643b41-42c6-4d59-8ee9-591b9717a088\") " pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.425107 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.430923 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3f19-account-create-update-h6kd6"] Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.526013 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-operator-scripts\") pod \"glance-3f19-account-create-update-h6kd6\" (UID: \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\") " pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.526148 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd643b41-42c6-4d59-8ee9-591b9717a088-operator-scripts\") pod \"glance-db-create-6k7tg\" (UID: \"fd643b41-42c6-4d59-8ee9-591b9717a088\") " pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.526400 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxn2q\" (UniqueName: \"kubernetes.io/projected/fd643b41-42c6-4d59-8ee9-591b9717a088-kube-api-access-bxn2q\") pod \"glance-db-create-6k7tg\" (UID: \"fd643b41-42c6-4d59-8ee9-591b9717a088\") " pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:48 crc 
kubenswrapper[4966]: I0127 16:03:48.526581 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4ljg\" (UniqueName: \"kubernetes.io/projected/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-kube-api-access-t4ljg\") pod \"glance-3f19-account-create-update-h6kd6\" (UID: \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\") " pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.527171 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd643b41-42c6-4d59-8ee9-591b9717a088-operator-scripts\") pod \"glance-db-create-6k7tg\" (UID: \"fd643b41-42c6-4d59-8ee9-591b9717a088\") " pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.628321 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4ljg\" (UniqueName: \"kubernetes.io/projected/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-kube-api-access-t4ljg\") pod \"glance-3f19-account-create-update-h6kd6\" (UID: \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\") " pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.628635 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:48 crc kubenswrapper[4966]: E0127 16:03:48.628789 4966 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 16:03:48 crc kubenswrapper[4966]: E0127 16:03:48.628820 4966 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 16:03:48 crc kubenswrapper[4966]: E0127 16:03:48.628880 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift podName:a59c903b-6e40-43bd-a120-e47e504cf5a9 nodeName:}" failed. No retries permitted until 2026-01-27 16:03:56.628862124 +0000 UTC m=+1302.931655612 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift") pod "swift-storage-0" (UID: "a59c903b-6e40-43bd-a120-e47e504cf5a9") : configmap "swift-ring-files" not found Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.629075 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-operator-scripts\") pod \"glance-3f19-account-create-update-h6kd6\" (UID: \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\") " pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.630343 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-operator-scripts\") pod \"glance-3f19-account-create-update-h6kd6\" (UID: \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\") " pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.643447 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxn2q\" (UniqueName: \"kubernetes.io/projected/fd643b41-42c6-4d59-8ee9-591b9717a088-kube-api-access-bxn2q\") pod \"glance-db-create-6k7tg\" (UID: \"fd643b41-42c6-4d59-8ee9-591b9717a088\") " pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.647559 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4ljg\" (UniqueName: \"kubernetes.io/projected/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-kube-api-access-t4ljg\") pod \"glance-3f19-account-create-update-h6kd6\" (UID: \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\") " pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.739433 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:48 crc kubenswrapper[4966]: I0127 16:03:48.889353 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.479763 4966 patch_prober.go:28] interesting pod/console-774cdb758b-bmhpk container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.90:8443/health\": dial tcp 10.217.0.90:8443: connect: connection refused" start-of-body= Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.480420 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-774cdb758b-bmhpk" podUID="efa660ea-77f1-49d3-809c-d8f73519dd08" containerName="console" probeResult="failure" output="Get \"https://10.217.0.90:8443/health\": dial tcp 10.217.0.90:8443: connect: connection refused" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.485286 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqw86"] Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.487003 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.545015 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqw86"] Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.682103 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vqw86\" (UID: \"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\") " pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.682298 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rkqg\" (UniqueName: \"kubernetes.io/projected/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-kube-api-access-8rkqg\") pod \"mysqld-exporter-openstack-db-create-vqw86\" (UID: \"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\") " pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.685461 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-456c-account-create-update-s5w5c"] Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.686801 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.704665 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.719256 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-456c-account-create-update-s5w5c"] Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.787384 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mp2t\" (UniqueName: \"kubernetes.io/projected/7033d440-7efe-4838-b31b-d84a86491a1f-kube-api-access-8mp2t\") pod \"mysqld-exporter-456c-account-create-update-s5w5c\" (UID: \"7033d440-7efe-4838-b31b-d84a86491a1f\") " pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.787820 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rkqg\" (UniqueName: \"kubernetes.io/projected/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-kube-api-access-8rkqg\") pod \"mysqld-exporter-openstack-db-create-vqw86\" (UID: \"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\") " pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.787853 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7033d440-7efe-4838-b31b-d84a86491a1f-operator-scripts\") pod \"mysqld-exporter-456c-account-create-update-s5w5c\" (UID: \"7033d440-7efe-4838-b31b-d84a86491a1f\") " pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.787929 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vqw86\" (UID: 
\"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\") " pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.788647 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vqw86\" (UID: \"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\") " pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.810645 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rkqg\" (UniqueName: \"kubernetes.io/projected/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-kube-api-access-8rkqg\") pod \"mysqld-exporter-openstack-db-create-vqw86\" (UID: \"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\") " pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.839366 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.891701 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mp2t\" (UniqueName: \"kubernetes.io/projected/7033d440-7efe-4838-b31b-d84a86491a1f-kube-api-access-8mp2t\") pod \"mysqld-exporter-456c-account-create-update-s5w5c\" (UID: \"7033d440-7efe-4838-b31b-d84a86491a1f\") " pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.892099 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7033d440-7efe-4838-b31b-d84a86491a1f-operator-scripts\") pod \"mysqld-exporter-456c-account-create-update-s5w5c\" (UID: \"7033d440-7efe-4838-b31b-d84a86491a1f\") " pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.892992 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7033d440-7efe-4838-b31b-d84a86491a1f-operator-scripts\") pod \"mysqld-exporter-456c-account-create-update-s5w5c\" (UID: \"7033d440-7efe-4838-b31b-d84a86491a1f\") " pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.894679 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerStarted","Data":"bdba0b211926a4b0fa7405022533bc91f629b32bffc8a5e980902fcbf6cf6cfa"} Jan 27 16:03:49 crc kubenswrapper[4966]: I0127 16:03:49.930593 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mp2t\" (UniqueName: \"kubernetes.io/projected/7033d440-7efe-4838-b31b-d84a86491a1f-kube-api-access-8mp2t\") pod \"mysqld-exporter-456c-account-create-update-s5w5c\" (UID: \"7033d440-7efe-4838-b31b-d84a86491a1f\") " pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.010552 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.109316 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.192403 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zk5fg"] Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.192648 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-zk5fg" podUID="2f2f7ea2-3cca-492e-af4d-bde605e40529" containerName="dnsmasq-dns" containerID="cri-o://6b20d448c2699a2a61fa7c08aa9f9d280247f4e9124758fdfd6b1fd8cbf45625" gracePeriod=10 Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.196119 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.488353 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-774cdb758b-bmhpk_efa660ea-77f1-49d3-809c-d8f73519dd08/console/0.log" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.488731 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.607057 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-oauth-config\") pod \"efa660ea-77f1-49d3-809c-d8f73519dd08\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.607135 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-oauth-serving-cert\") pod \"efa660ea-77f1-49d3-809c-d8f73519dd08\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.607489 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-trusted-ca-bundle\") pod \"efa660ea-77f1-49d3-809c-d8f73519dd08\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.607593 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-service-ca\") pod \"efa660ea-77f1-49d3-809c-d8f73519dd08\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.607677 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-console-config\") pod \"efa660ea-77f1-49d3-809c-d8f73519dd08\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.607768 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-serving-cert\") pod \"efa660ea-77f1-49d3-809c-d8f73519dd08\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " Jan 27 
16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.607816 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5m4j\" (UniqueName: \"kubernetes.io/projected/efa660ea-77f1-49d3-809c-d8f73519dd08-kube-api-access-n5m4j\") pod \"efa660ea-77f1-49d3-809c-d8f73519dd08\" (UID: \"efa660ea-77f1-49d3-809c-d8f73519dd08\") " Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.608633 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "efa660ea-77f1-49d3-809c-d8f73519dd08" (UID: "efa660ea-77f1-49d3-809c-d8f73519dd08"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.608908 4966 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.609298 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-service-ca" (OuterVolumeSpecName: "service-ca") pod "efa660ea-77f1-49d3-809c-d8f73519dd08" (UID: "efa660ea-77f1-49d3-809c-d8f73519dd08"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.609585 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "efa660ea-77f1-49d3-809c-d8f73519dd08" (UID: "efa660ea-77f1-49d3-809c-d8f73519dd08"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.609615 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-console-config" (OuterVolumeSpecName: "console-config") pod "efa660ea-77f1-49d3-809c-d8f73519dd08" (UID: "efa660ea-77f1-49d3-809c-d8f73519dd08"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.615715 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "efa660ea-77f1-49d3-809c-d8f73519dd08" (UID: "efa660ea-77f1-49d3-809c-d8f73519dd08"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.615864 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa660ea-77f1-49d3-809c-d8f73519dd08-kube-api-access-n5m4j" (OuterVolumeSpecName: "kube-api-access-n5m4j") pod "efa660ea-77f1-49d3-809c-d8f73519dd08" (UID: "efa660ea-77f1-49d3-809c-d8f73519dd08"). InnerVolumeSpecName "kube-api-access-n5m4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.617116 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "efa660ea-77f1-49d3-809c-d8f73519dd08" (UID: "efa660ea-77f1-49d3-809c-d8f73519dd08"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.711995 4966 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.712022 4966 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.712032 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5m4j\" (UniqueName: \"kubernetes.io/projected/efa660ea-77f1-49d3-809c-d8f73519dd08-kube-api-access-n5m4j\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.712040 4966 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efa660ea-77f1-49d3-809c-d8f73519dd08-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.712048 4966 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.712055 4966 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efa660ea-77f1-49d3-809c-d8f73519dd08-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.912436 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6r97n" event={"ID":"5036c06b-cb10-4530-9315-4ba4dee273f0","Type":"ContainerStarted","Data":"a7209bfddd78309a0bfba8667676d69d42139f7126308ab7d9c0bc9f96c3dc84"} Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.915247 4966 generic.go:334] "Generic (PLEG): container finished" podID="2f2f7ea2-3cca-492e-af4d-bde605e40529" containerID="6b20d448c2699a2a61fa7c08aa9f9d280247f4e9124758fdfd6b1fd8cbf45625" exitCode=0 Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.915286 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zk5fg" event={"ID":"2f2f7ea2-3cca-492e-af4d-bde605e40529","Type":"ContainerDied","Data":"6b20d448c2699a2a61fa7c08aa9f9d280247f4e9124758fdfd6b1fd8cbf45625"} Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.932869 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6r97n" podStartSLOduration=4.189161211 podStartE2EDuration="9.932853497s" podCreationTimestamp="2026-01-27 16:03:41 +0000 UTC" firstStartedPulling="2026-01-27 16:03:44.526175951 +0000 UTC m=+1290.828969439" lastFinishedPulling="2026-01-27 16:03:50.269868237 +0000 UTC m=+1296.572661725" observedRunningTime="2026-01-27 16:03:50.930819653 +0000 UTC 
m=+1297.233613141" watchObservedRunningTime="2026-01-27 16:03:50.932853497 +0000 UTC m=+1297.235646985" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.940742 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-774cdb758b-bmhpk_efa660ea-77f1-49d3-809c-d8f73519dd08/console/0.log" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.940877 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-774cdb758b-bmhpk" Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.941091 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-774cdb758b-bmhpk" event={"ID":"efa660ea-77f1-49d3-809c-d8f73519dd08","Type":"ContainerDied","Data":"c63a4ff5788f54c3f95695abfad42cae02634213563c694837748449d09beade"} Jan 27 16:03:50 crc kubenswrapper[4966]: I0127 16:03:50.941139 4966 scope.go:117] "RemoveContainer" containerID="c47139525faa9af3edeb86b680fa64ead596a43b74df629b52759e228e74153d" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.050152 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.056013 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-774cdb758b-bmhpk"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.064539 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-774cdb758b-bmhpk"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.246566 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-config\") pod \"2f2f7ea2-3cca-492e-af4d-bde605e40529\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.247044 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-dns-svc\") pod \"2f2f7ea2-3cca-492e-af4d-bde605e40529\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.247121 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-sb\") pod \"2f2f7ea2-3cca-492e-af4d-bde605e40529\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.247184 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-nb\") pod \"2f2f7ea2-3cca-492e-af4d-bde605e40529\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.247283 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5f2c\" (UniqueName: \"kubernetes.io/projected/2f2f7ea2-3cca-492e-af4d-bde605e40529-kube-api-access-s5f2c\") pod \"2f2f7ea2-3cca-492e-af4d-bde605e40529\" (UID: \"2f2f7ea2-3cca-492e-af4d-bde605e40529\") " Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.272023 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2f7ea2-3cca-492e-af4d-bde605e40529-kube-api-access-s5f2c" (OuterVolumeSpecName: 
"kube-api-access-s5f2c") pod "2f2f7ea2-3cca-492e-af4d-bde605e40529" (UID: "2f2f7ea2-3cca-492e-af4d-bde605e40529"). InnerVolumeSpecName "kube-api-access-s5f2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.333757 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f2f7ea2-3cca-492e-af4d-bde605e40529" (UID: "2f2f7ea2-3cca-492e-af4d-bde605e40529"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.348726 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f2f7ea2-3cca-492e-af4d-bde605e40529" (UID: "2f2f7ea2-3cca-492e-af4d-bde605e40529"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.349497 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f2f7ea2-3cca-492e-af4d-bde605e40529" (UID: "2f2f7ea2-3cca-492e-af4d-bde605e40529"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:51 crc kubenswrapper[4966]: W0127 16:03:51.350603 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b4d4d2d_fe32_40fc_a565_8cdf4acbb853.slice/crio-1916d0d6ff287d70d7deb84040667f932634902ea10bbbe2591281d06c31af78 WatchSource:0}: Error finding container 1916d0d6ff287d70d7deb84040667f932634902ea10bbbe2591281d06c31af78: Status 404 returned error can't find the container with id 1916d0d6ff287d70d7deb84040667f932634902ea10bbbe2591281d06c31af78 Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.350960 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.350977 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.350987 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.350999 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5f2c\" (UniqueName: \"kubernetes.io/projected/2f2f7ea2-3cca-492e-af4d-bde605e40529-kube-api-access-s5f2c\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.361787 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2a93-account-create-update-hz627"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.371709 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-config" (OuterVolumeSpecName: "config") 
pod "2f2f7ea2-3cca-492e-af4d-bde605e40529" (UID: "2f2f7ea2-3cca-492e-af4d-bde605e40529"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.375291 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3f19-account-create-update-h6kd6"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.389284 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqw86"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.411335 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c879q"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.423349 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rlmr2"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.434366 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-456c-account-create-update-s5w5c"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.451919 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2f7ea2-3cca-492e-af4d-bde605e40529-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.493100 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f2b8-account-create-update-x59c5"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.521247 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6k7tg"] Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.956490 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c879q" event={"ID":"e59f8f2a-53c9-4e5a-bb13-e79058d62972","Type":"ContainerStarted","Data":"b5697f1c8ce68aff15fa82c90e3ff2661af5049db5fbc98beb823cfd854019cf"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.956806 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c879q" event={"ID":"e59f8f2a-53c9-4e5a-bb13-e79058d62972","Type":"ContainerStarted","Data":"777f42fa130e47883bfce4621fab237466908aff9d26f002fcde5925d5ca9264"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.963710 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vqw86" event={"ID":"b0c799f6-7ef0-4d2b-ae60-7808e22e9699","Type":"ContainerStarted","Data":"f359cba5060bfa416a0561eaf54a394493f4360fd51117c5f640e97a2beb4779"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.963753 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vqw86" event={"ID":"b0c799f6-7ef0-4d2b-ae60-7808e22e9699","Type":"ContainerStarted","Data":"c714dafef778194dc4ee618d83491484a5ba6f5240a5df33d2bbeb53a58b2253"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.968037 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" event={"ID":"7033d440-7efe-4838-b31b-d84a86491a1f","Type":"ContainerStarted","Data":"bda92224ebd6d3388fad27d0138bd99e486f2675ae0fcdc4086bcf2f0b4bf9ff"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.968081 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" 
event={"ID":"7033d440-7efe-4838-b31b-d84a86491a1f","Type":"ContainerStarted","Data":"bc362a07a106dbbaddead49a8d868101a953d4b318a2a3e90a0dde3abfd4ac65"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.969434 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3f19-account-create-update-h6kd6" event={"ID":"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853","Type":"ContainerStarted","Data":"b317d37e1a5a7c3729faaa312ef90f0fd1be9693cf907574f629c044b8e9e8dd"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.969485 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3f19-account-create-update-h6kd6" event={"ID":"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853","Type":"ContainerStarted","Data":"1916d0d6ff287d70d7deb84040667f932634902ea10bbbe2591281d06c31af78"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.972092 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rlmr2" event={"ID":"22399f16-c5ed-4686-9d3e-f73bc0bea72e","Type":"ContainerStarted","Data":"8915f48209e200d2336575236659726fd62dd8a48afeb76b4352e4b52aacfb63"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.972124 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rlmr2" event={"ID":"22399f16-c5ed-4686-9d3e-f73bc0bea72e","Type":"ContainerStarted","Data":"65c2fac7e50b698d442c2c9caadbfdcbc5d1c832faa6405dc46dd7d263c5eb87"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.974839 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a93-account-create-update-hz627" event={"ID":"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb","Type":"ContainerStarted","Data":"9d9640c6dcbdb3d048da5960606e7fa314e5605495c9072bbec9bd39ddb5e81b"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.974867 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a93-account-create-update-hz627" event={"ID":"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb","Type":"ContainerStarted","Data":"2b293a7473d4f278e4286f958c9218c6a9f524e55b4359800129dc979b52ba0b"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.983071 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zk5fg" event={"ID":"2f2f7ea2-3cca-492e-af4d-bde605e40529","Type":"ContainerDied","Data":"ffec6c60892623084bfe924b8191338981eb779047c17324d9b387aec72267b7"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.983121 4966 scope.go:117] "RemoveContainer" containerID="6b20d448c2699a2a61fa7c08aa9f9d280247f4e9124758fdfd6b1fd8cbf45625" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.983244 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zk5fg" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.983285 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-c879q" podStartSLOduration=4.983263211 podStartE2EDuration="4.983263211s" podCreationTimestamp="2026-01-27 16:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:51.97525178 +0000 UTC m=+1298.278045268" watchObservedRunningTime="2026-01-27 16:03:51.983263211 +0000 UTC m=+1298.286056699" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.995168 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-vqw86" podStartSLOduration=2.995152205 podStartE2EDuration="2.995152205s" podCreationTimestamp="2026-01-27 16:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:51.994527335 +0000 UTC m=+1298.297320823" watchObservedRunningTime="2026-01-27 16:03:51.995152205 +0000 UTC m=+1298.297945693" Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.996286 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6k7tg" event={"ID":"fd643b41-42c6-4d59-8ee9-591b9717a088","Type":"ContainerStarted","Data":"587e4147c19e024d12165e83fdaba9915d92637ded5bfefe5f8a1311922a42f6"} Jan 27 16:03:51 crc kubenswrapper[4966]: I0127 16:03:51.996328 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6k7tg" event={"ID":"fd643b41-42c6-4d59-8ee9-591b9717a088","Type":"ContainerStarted","Data":"67703f826736cdcf622284232c87cfc870fab2e5d2e8dced1888a955132e583e"} Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.001088 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f2b8-account-create-update-x59c5" event={"ID":"006407e5-c179-4f35-874a-af73cc024106","Type":"ContainerStarted","Data":"3b05ef822c13e9be7561f4cdb501dc287ba12816f27d46cde20ef89b83d8bea7"} Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.001187 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f2b8-account-create-update-x59c5" event={"ID":"006407e5-c179-4f35-874a-af73cc024106","Type":"ContainerStarted","Data":"38b6284605d0efcc323061bcc9d4c863b2f64ef16975a9d873ea26438224930c"} Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.035553 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" podStartSLOduration=3.035532102 podStartE2EDuration="3.035532102s" podCreationTimestamp="2026-01-27 16:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:52.015264576 +0000 UTC m=+1298.318058064" watchObservedRunningTime="2026-01-27 16:03:52.035532102 +0000 UTC m=+1298.338325590" Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.038568 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-rlmr2" podStartSLOduration=5.038554286 podStartE2EDuration="5.038554286s" podCreationTimestamp="2026-01-27 16:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:52.032751865 +0000 UTC 
m=+1298.335545353" watchObservedRunningTime="2026-01-27 16:03:52.038554286 +0000 UTC m=+1298.341347774" Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.069095 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3f19-account-create-update-h6kd6" podStartSLOduration=4.069079414 podStartE2EDuration="4.069079414s" podCreationTimestamp="2026-01-27 16:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:52.057420588 +0000 UTC m=+1298.360214076" watchObservedRunningTime="2026-01-27 16:03:52.069079414 +0000 UTC m=+1298.371872902" Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.082702 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-6k7tg" podStartSLOduration=4.082687191 podStartE2EDuration="4.082687191s" podCreationTimestamp="2026-01-27 16:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:52.0714901 +0000 UTC m=+1298.374283588" watchObservedRunningTime="2026-01-27 16:03:52.082687191 +0000 UTC m=+1298.385480679" Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.093617 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f2b8-account-create-update-x59c5" podStartSLOduration=5.093597443 podStartE2EDuration="5.093597443s" podCreationTimestamp="2026-01-27 16:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:03:52.09031574 +0000 UTC m=+1298.393109228" watchObservedRunningTime="2026-01-27 16:03:52.093597443 +0000 UTC m=+1298.396390931" Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.118791 4966 scope.go:117] "RemoveContainer" containerID="f93e756ae0f95871edced10cc429e22a4ab8510303259779ad5c4a4735004be7" Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.124908 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zk5fg"] Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.134195 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zk5fg"] Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.548440 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2f7ea2-3cca-492e-af4d-bde605e40529" path="/var/lib/kubelet/pods/2f2f7ea2-3cca-492e-af4d-bde605e40529/volumes" Jan 27 16:03:52 crc kubenswrapper[4966]: I0127 16:03:52.549236 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa660ea-77f1-49d3-809c-d8f73519dd08" path="/var/lib/kubelet/pods/efa660ea-77f1-49d3-809c-d8f73519dd08/volumes" Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.021778 4966 generic.go:334] "Generic (PLEG): container finished" podID="22399f16-c5ed-4686-9d3e-f73bc0bea72e" containerID="8915f48209e200d2336575236659726fd62dd8a48afeb76b4352e4b52aacfb63" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.021819 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rlmr2" event={"ID":"22399f16-c5ed-4686-9d3e-f73bc0bea72e","Type":"ContainerDied","Data":"8915f48209e200d2336575236659726fd62dd8a48afeb76b4352e4b52aacfb63"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.024263 4966 generic.go:334] "Generic (PLEG): container finished" 
podID="6fd7137c-b5f0-4e7b-9fd3-d145e97493eb" containerID="9d9640c6dcbdb3d048da5960606e7fa314e5605495c9072bbec9bd39ddb5e81b" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.024281 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a93-account-create-update-hz627" event={"ID":"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb","Type":"ContainerDied","Data":"9d9640c6dcbdb3d048da5960606e7fa314e5605495c9072bbec9bd39ddb5e81b"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.026607 4966 generic.go:334] "Generic (PLEG): container finished" podID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerID="d49edacde651fb39c747b161fac2fee2073f47d6e0a4fad1361e65b9d66a4739" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.026664 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b278fb8-add2-4ddc-9e93-71962f1bb6fa","Type":"ContainerDied","Data":"d49edacde651fb39c747b161fac2fee2073f47d6e0a4fad1361e65b9d66a4739"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.027840 4966 generic.go:334] "Generic (PLEG): container finished" podID="e59f8f2a-53c9-4e5a-bb13-e79058d62972" containerID="b5697f1c8ce68aff15fa82c90e3ff2661af5049db5fbc98beb823cfd854019cf" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.027859 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c879q" event={"ID":"e59f8f2a-53c9-4e5a-bb13-e79058d62972","Type":"ContainerDied","Data":"b5697f1c8ce68aff15fa82c90e3ff2661af5049db5fbc98beb823cfd854019cf"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.034703 4966 generic.go:334] "Generic (PLEG): container finished" podID="006407e5-c179-4f35-874a-af73cc024106" containerID="3b05ef822c13e9be7561f4cdb501dc287ba12816f27d46cde20ef89b83d8bea7" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.034992 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f2b8-account-create-update-x59c5" event={"ID":"006407e5-c179-4f35-874a-af73cc024106","Type":"ContainerDied","Data":"3b05ef822c13e9be7561f4cdb501dc287ba12816f27d46cde20ef89b83d8bea7"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.038302 4966 generic.go:334] "Generic (PLEG): container finished" podID="7033d440-7efe-4838-b31b-d84a86491a1f" containerID="bda92224ebd6d3388fad27d0138bd99e486f2675ae0fcdc4086bcf2f0b4bf9ff" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.038371 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" event={"ID":"7033d440-7efe-4838-b31b-d84a86491a1f","Type":"ContainerDied","Data":"bda92224ebd6d3388fad27d0138bd99e486f2675ae0fcdc4086bcf2f0b4bf9ff"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.044833 4966 generic.go:334] "Generic (PLEG): container finished" podID="b0c799f6-7ef0-4d2b-ae60-7808e22e9699" containerID="f359cba5060bfa416a0561eaf54a394493f4360fd51117c5f640e97a2beb4779" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.045032 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vqw86" event={"ID":"b0c799f6-7ef0-4d2b-ae60-7808e22e9699","Type":"ContainerDied","Data":"f359cba5060bfa416a0561eaf54a394493f4360fd51117c5f640e97a2beb4779"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.047271 4966 generic.go:334] "Generic (PLEG): container finished" podID="1b4d4d2d-fe32-40fc-a565-8cdf4acbb853" 
containerID="b317d37e1a5a7c3729faaa312ef90f0fd1be9693cf907574f629c044b8e9e8dd" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.047291 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3f19-account-create-update-h6kd6" event={"ID":"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853","Type":"ContainerDied","Data":"b317d37e1a5a7c3729faaa312ef90f0fd1be9693cf907574f629c044b8e9e8dd"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.049823 4966 generic.go:334] "Generic (PLEG): container finished" podID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerID="90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.049925 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3744b7e0-d355-43b7-bbf3-853416fb4483","Type":"ContainerDied","Data":"90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.065323 4966 generic.go:334] "Generic (PLEG): container finished" podID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerID="cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.065434 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c9809bd0-3c51-46c3-b6c0-0b2576685999","Type":"ContainerDied","Data":"cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.075261 4966 generic.go:334] "Generic (PLEG): container finished" podID="fd643b41-42c6-4d59-8ee9-591b9717a088" containerID="587e4147c19e024d12165e83fdaba9915d92637ded5bfefe5f8a1311922a42f6" exitCode=0 Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.075301 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6k7tg" event={"ID":"fd643b41-42c6-4d59-8ee9-591b9717a088","Type":"ContainerDied","Data":"587e4147c19e024d12165e83fdaba9915d92637ded5bfefe5f8a1311922a42f6"} Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.463365 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.596073 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-operator-scripts\") pod \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\" (UID: \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\") " Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.596167 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2t65\" (UniqueName: \"kubernetes.io/projected/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-kube-api-access-x2t65\") pod \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\" (UID: \"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb\") " Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.597917 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fd7137c-b5f0-4e7b-9fd3-d145e97493eb" (UID: "6fd7137c-b5f0-4e7b-9fd3-d145e97493eb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.603080 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-kube-api-access-x2t65" (OuterVolumeSpecName: "kube-api-access-x2t65") pod "6fd7137c-b5f0-4e7b-9fd3-d145e97493eb" (UID: "6fd7137c-b5f0-4e7b-9fd3-d145e97493eb"). InnerVolumeSpecName "kube-api-access-x2t65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.699311 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:53 crc kubenswrapper[4966]: I0127 16:03:53.699358 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2t65\" (UniqueName: \"kubernetes.io/projected/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb-kube-api-access-x2t65\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.089479 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3744b7e0-d355-43b7-bbf3-853416fb4483","Type":"ContainerStarted","Data":"3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3"} Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.089667 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.091648 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b278fb8-add2-4ddc-9e93-71962f1bb6fa","Type":"ContainerStarted","Data":"c157f7e47f5330bec30b4bc570fc98c7930f51f7ce8e9d4a0fdcc0a31cc6e885"} Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.091866 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.094469 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c9809bd0-3c51-46c3-b6c0-0b2576685999","Type":"ContainerStarted","Data":"83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a"} Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.094732 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.096476 4966 generic.go:334] "Generic (PLEG): container finished" podID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerID="bd0a2da95b03bf8155f1de1f0fff387b68f0343f263fee5bbb7fdf0b0e05dcb1" exitCode=0 Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.096551 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb","Type":"ContainerDied","Data":"bd0a2da95b03bf8155f1de1f0fff387b68f0343f263fee5bbb7fdf0b0e05dcb1"} Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.100039 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a93-account-create-update-hz627" event={"ID":"6fd7137c-b5f0-4e7b-9fd3-d145e97493eb","Type":"ContainerDied","Data":"2b293a7473d4f278e4286f958c9218c6a9f524e55b4359800129dc979b52ba0b"} Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.100068 4966 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2b293a7473d4f278e4286f958c9218c6a9f524e55b4359800129dc979b52ba0b" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.100107 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a93-account-create-update-hz627" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.131564 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.045524253 podStartE2EDuration="1m1.131540239s" podCreationTimestamp="2026-01-27 16:02:53 +0000 UTC" firstStartedPulling="2026-01-27 16:02:55.499224831 +0000 UTC m=+1241.802018319" lastFinishedPulling="2026-01-27 16:03:18.585240817 +0000 UTC m=+1264.888034305" observedRunningTime="2026-01-27 16:03:54.120634187 +0000 UTC m=+1300.423427705" watchObservedRunningTime="2026-01-27 16:03:54.131540239 +0000 UTC m=+1300.434333727" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.193295 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.817678105 podStartE2EDuration="1m1.193273846s" podCreationTimestamp="2026-01-27 16:02:53 +0000 UTC" firstStartedPulling="2026-01-27 16:02:55.399166882 +0000 UTC m=+1241.701960360" lastFinishedPulling="2026-01-27 16:03:18.774762613 +0000 UTC m=+1265.077556101" observedRunningTime="2026-01-27 16:03:54.185465461 +0000 UTC m=+1300.488258969" watchObservedRunningTime="2026-01-27 16:03:54.193273846 +0000 UTC m=+1300.496067334" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.238175 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.385486849 podStartE2EDuration="1m1.238154124s" podCreationTimestamp="2026-01-27 16:02:53 +0000 UTC" firstStartedPulling="2026-01-27 16:02:55.708995402 +0000 UTC m=+1242.011788890" lastFinishedPulling="2026-01-27 16:03:18.561662677 +0000 UTC m=+1264.864456165" observedRunningTime="2026-01-27 16:03:54.227382967 +0000 UTC m=+1300.530176475" watchObservedRunningTime="2026-01-27 16:03:54.238154124 +0000 UTC m=+1300.540947612" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.694139 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.722910 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22399f16-c5ed-4686-9d3e-f73bc0bea72e-operator-scripts\") pod \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\" (UID: \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\") " Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.723144 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfgg5\" (UniqueName: \"kubernetes.io/projected/22399f16-c5ed-4686-9d3e-f73bc0bea72e-kube-api-access-pfgg5\") pod \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\" (UID: \"22399f16-c5ed-4686-9d3e-f73bc0bea72e\") " Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.723355 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22399f16-c5ed-4686-9d3e-f73bc0bea72e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22399f16-c5ed-4686-9d3e-f73bc0bea72e" (UID: "22399f16-c5ed-4686-9d3e-f73bc0bea72e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.723841 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22399f16-c5ed-4686-9d3e-f73bc0bea72e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.728270 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22399f16-c5ed-4686-9d3e-f73bc0bea72e-kube-api-access-pfgg5" (OuterVolumeSpecName: "kube-api-access-pfgg5") pod "22399f16-c5ed-4686-9d3e-f73bc0bea72e" (UID: "22399f16-c5ed-4686-9d3e-f73bc0bea72e"). InnerVolumeSpecName "kube-api-access-pfgg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.826294 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfgg5\" (UniqueName: \"kubernetes.io/projected/22399f16-c5ed-4686-9d3e-f73bc0bea72e-kube-api-access-pfgg5\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.883784 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9h649"] Jan 27 16:03:54 crc kubenswrapper[4966]: E0127 16:03:54.884325 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2f7ea2-3cca-492e-af4d-bde605e40529" containerName="init" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.884342 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2f7ea2-3cca-492e-af4d-bde605e40529" containerName="init" Jan 27 16:03:54 crc kubenswrapper[4966]: E0127 16:03:54.884352 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa660ea-77f1-49d3-809c-d8f73519dd08" containerName="console" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.884359 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa660ea-77f1-49d3-809c-d8f73519dd08" containerName="console" Jan 27 16:03:54 crc kubenswrapper[4966]: E0127 16:03:54.884385 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2f7ea2-3cca-492e-af4d-bde605e40529" containerName="dnsmasq-dns" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.884395 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2f7ea2-3cca-492e-af4d-bde605e40529" containerName="dnsmasq-dns" Jan 27 16:03:54 crc kubenswrapper[4966]: E0127 16:03:54.884417 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd7137c-b5f0-4e7b-9fd3-d145e97493eb" containerName="mariadb-account-create-update" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.884425 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd7137c-b5f0-4e7b-9fd3-d145e97493eb" containerName="mariadb-account-create-update" Jan 27 16:03:54 crc kubenswrapper[4966]: E0127 16:03:54.884452 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22399f16-c5ed-4686-9d3e-f73bc0bea72e" containerName="mariadb-database-create" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.884460 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="22399f16-c5ed-4686-9d3e-f73bc0bea72e" containerName="mariadb-database-create" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.884684 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd7137c-b5f0-4e7b-9fd3-d145e97493eb" containerName="mariadb-account-create-update" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.884701 4966 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2f2f7ea2-3cca-492e-af4d-bde605e40529" containerName="dnsmasq-dns" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.884721 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa660ea-77f1-49d3-809c-d8f73519dd08" containerName="console" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.884738 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="22399f16-c5ed-4686-9d3e-f73bc0bea72e" containerName="mariadb-database-create" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.888425 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9h649" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.890786 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.904359 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9h649"] Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.929255 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhxr\" (UniqueName: \"kubernetes.io/projected/a611a514-a28b-48ab-bd3a-00982afc19fb-kube-api-access-zzhxr\") pod \"root-account-create-update-9h649\" (UID: \"a611a514-a28b-48ab-bd3a-00982afc19fb\") " pod="openstack/root-account-create-update-9h649" Jan 27 16:03:54 crc kubenswrapper[4966]: I0127 16:03:54.929330 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a611a514-a28b-48ab-bd3a-00982afc19fb-operator-scripts\") pod \"root-account-create-update-9h649\" (UID: \"a611a514-a28b-48ab-bd3a-00982afc19fb\") " pod="openstack/root-account-create-update-9h649" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.031730 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzhxr\" (UniqueName: \"kubernetes.io/projected/a611a514-a28b-48ab-bd3a-00982afc19fb-kube-api-access-zzhxr\") pod \"root-account-create-update-9h649\" (UID: \"a611a514-a28b-48ab-bd3a-00982afc19fb\") " pod="openstack/root-account-create-update-9h649" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.032221 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a611a514-a28b-48ab-bd3a-00982afc19fb-operator-scripts\") pod \"root-account-create-update-9h649\" (UID: \"a611a514-a28b-48ab-bd3a-00982afc19fb\") " pod="openstack/root-account-create-update-9h649" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.033042 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a611a514-a28b-48ab-bd3a-00982afc19fb-operator-scripts\") pod \"root-account-create-update-9h649\" (UID: \"a611a514-a28b-48ab-bd3a-00982afc19fb\") " pod="openstack/root-account-create-update-9h649" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.052164 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzhxr\" (UniqueName: \"kubernetes.io/projected/a611a514-a28b-48ab-bd3a-00982afc19fb-kube-api-access-zzhxr\") pod \"root-account-create-update-9h649\" (UID: \"a611a514-a28b-48ab-bd3a-00982afc19fb\") " pod="openstack/root-account-create-update-9h649" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.122583 4966 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb","Type":"ContainerStarted","Data":"fd167b1321ad76f31f0fc4ebd397387ffcc7106c1a113bb7df2740ee859fe644"} Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.122891 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.127628 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rlmr2" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.128198 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rlmr2" event={"ID":"22399f16-c5ed-4686-9d3e-f73bc0bea72e","Type":"ContainerDied","Data":"65c2fac7e50b698d442c2c9caadbfdcbc5d1c832faa6405dc46dd7d263c5eb87"} Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.128309 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65c2fac7e50b698d442c2c9caadbfdcbc5d1c832faa6405dc46dd7d263c5eb87" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.174363 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.743629585 podStartE2EDuration="1m2.174341455s" podCreationTimestamp="2026-01-27 16:02:53 +0000 UTC" firstStartedPulling="2026-01-27 16:02:55.324415957 +0000 UTC m=+1241.627209455" lastFinishedPulling="2026-01-27 16:03:18.755127837 +0000 UTC m=+1265.057921325" observedRunningTime="2026-01-27 16:03:55.164020451 +0000 UTC m=+1301.466813959" watchObservedRunningTime="2026-01-27 16:03:55.174341455 +0000 UTC m=+1301.477134943" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.207615 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9h649" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.445212 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.446040 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.468798 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.532580 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.542442 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.550717 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-c879q" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.569958 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rkqg\" (UniqueName: \"kubernetes.io/projected/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-kube-api-access-8rkqg\") pod \"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\" (UID: \"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.570152 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mp2t\" (UniqueName: \"kubernetes.io/projected/7033d440-7efe-4838-b31b-d84a86491a1f-kube-api-access-8mp2t\") pod \"7033d440-7efe-4838-b31b-d84a86491a1f\" (UID: \"7033d440-7efe-4838-b31b-d84a86491a1f\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.571072 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-operator-scripts\") pod \"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\" (UID: \"b0c799f6-7ef0-4d2b-ae60-7808e22e9699\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.571107 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4ljg\" (UniqueName: \"kubernetes.io/projected/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-kube-api-access-t4ljg\") pod \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\" (UID: \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.571227 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7033d440-7efe-4838-b31b-d84a86491a1f-operator-scripts\") pod \"7033d440-7efe-4838-b31b-d84a86491a1f\" (UID: \"7033d440-7efe-4838-b31b-d84a86491a1f\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.571382 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-operator-scripts\") pod \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\" (UID: \"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.574190 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0c799f6-7ef0-4d2b-ae60-7808e22e9699" (UID: "b0c799f6-7ef0-4d2b-ae60-7808e22e9699"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.574597 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b4d4d2d-fe32-40fc-a565-8cdf4acbb853" (UID: "1b4d4d2d-fe32-40fc-a565-8cdf4acbb853"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.575005 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.575028 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.576452 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7033d440-7efe-4838-b31b-d84a86491a1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7033d440-7efe-4838-b31b-d84a86491a1f" (UID: "7033d440-7efe-4838-b31b-d84a86491a1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.578223 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-kube-api-access-t4ljg" (OuterVolumeSpecName: "kube-api-access-t4ljg") pod "1b4d4d2d-fe32-40fc-a565-8cdf4acbb853" (UID: "1b4d4d2d-fe32-40fc-a565-8cdf4acbb853"). InnerVolumeSpecName "kube-api-access-t4ljg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.601759 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-kube-api-access-8rkqg" (OuterVolumeSpecName: "kube-api-access-8rkqg") pod "b0c799f6-7ef0-4d2b-ae60-7808e22e9699" (UID: "b0c799f6-7ef0-4d2b-ae60-7808e22e9699"). InnerVolumeSpecName "kube-api-access-8rkqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.601871 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7033d440-7efe-4838-b31b-d84a86491a1f-kube-api-access-8mp2t" (OuterVolumeSpecName: "kube-api-access-8mp2t") pod "7033d440-7efe-4838-b31b-d84a86491a1f" (UID: "7033d440-7efe-4838-b31b-d84a86491a1f"). InnerVolumeSpecName "kube-api-access-8mp2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.677516 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7lcp\" (UniqueName: \"kubernetes.io/projected/e59f8f2a-53c9-4e5a-bb13-e79058d62972-kube-api-access-c7lcp\") pod \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\" (UID: \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.677573 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006407e5-c179-4f35-874a-af73cc024106-operator-scripts\") pod \"006407e5-c179-4f35-874a-af73cc024106\" (UID: \"006407e5-c179-4f35-874a-af73cc024106\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.677675 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxc99\" (UniqueName: \"kubernetes.io/projected/006407e5-c179-4f35-874a-af73cc024106-kube-api-access-lxc99\") pod \"006407e5-c179-4f35-874a-af73cc024106\" (UID: \"006407e5-c179-4f35-874a-af73cc024106\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.677791 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59f8f2a-53c9-4e5a-bb13-e79058d62972-operator-scripts\") pod \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\" (UID: \"e59f8f2a-53c9-4e5a-bb13-e79058d62972\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.677862 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxn2q\" (UniqueName: \"kubernetes.io/projected/fd643b41-42c6-4d59-8ee9-591b9717a088-kube-api-access-bxn2q\") pod \"fd643b41-42c6-4d59-8ee9-591b9717a088\" (UID: \"fd643b41-42c6-4d59-8ee9-591b9717a088\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.677882 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd643b41-42c6-4d59-8ee9-591b9717a088-operator-scripts\") pod \"fd643b41-42c6-4d59-8ee9-591b9717a088\" (UID: \"fd643b41-42c6-4d59-8ee9-591b9717a088\") " Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.678522 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7033d440-7efe-4838-b31b-d84a86491a1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.678543 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rkqg\" (UniqueName: \"kubernetes.io/projected/b0c799f6-7ef0-4d2b-ae60-7808e22e9699-kube-api-access-8rkqg\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.678555 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mp2t\" (UniqueName: \"kubernetes.io/projected/7033d440-7efe-4838-b31b-d84a86491a1f-kube-api-access-8mp2t\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.678564 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4ljg\" (UniqueName: \"kubernetes.io/projected/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853-kube-api-access-t4ljg\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.680328 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e59f8f2a-53c9-4e5a-bb13-e79058d62972-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e59f8f2a-53c9-4e5a-bb13-e79058d62972" (UID: "e59f8f2a-53c9-4e5a-bb13-e79058d62972"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.681486 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd643b41-42c6-4d59-8ee9-591b9717a088-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd643b41-42c6-4d59-8ee9-591b9717a088" (UID: "fd643b41-42c6-4d59-8ee9-591b9717a088"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.681854 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006407e5-c179-4f35-874a-af73cc024106-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "006407e5-c179-4f35-874a-af73cc024106" (UID: "006407e5-c179-4f35-874a-af73cc024106"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.684019 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006407e5-c179-4f35-874a-af73cc024106-kube-api-access-lxc99" (OuterVolumeSpecName: "kube-api-access-lxc99") pod "006407e5-c179-4f35-874a-af73cc024106" (UID: "006407e5-c179-4f35-874a-af73cc024106"). InnerVolumeSpecName "kube-api-access-lxc99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.685697 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59f8f2a-53c9-4e5a-bb13-e79058d62972-kube-api-access-c7lcp" (OuterVolumeSpecName: "kube-api-access-c7lcp") pod "e59f8f2a-53c9-4e5a-bb13-e79058d62972" (UID: "e59f8f2a-53c9-4e5a-bb13-e79058d62972"). InnerVolumeSpecName "kube-api-access-c7lcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.688323 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd643b41-42c6-4d59-8ee9-591b9717a088-kube-api-access-bxn2q" (OuterVolumeSpecName: "kube-api-access-bxn2q") pod "fd643b41-42c6-4d59-8ee9-591b9717a088" (UID: "fd643b41-42c6-4d59-8ee9-591b9717a088"). InnerVolumeSpecName "kube-api-access-bxn2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.780355 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxn2q\" (UniqueName: \"kubernetes.io/projected/fd643b41-42c6-4d59-8ee9-591b9717a088-kube-api-access-bxn2q\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.780395 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd643b41-42c6-4d59-8ee9-591b9717a088-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.780405 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7lcp\" (UniqueName: \"kubernetes.io/projected/e59f8f2a-53c9-4e5a-bb13-e79058d62972-kube-api-access-c7lcp\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.780414 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006407e5-c179-4f35-874a-af73cc024106-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.780424 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxc99\" (UniqueName: \"kubernetes.io/projected/006407e5-c179-4f35-874a-af73cc024106-kube-api-access-lxc99\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.780433 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59f8f2a-53c9-4e5a-bb13-e79058d62972-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4966]: I0127 16:03:55.915328 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9h649"] Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.135244 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9h649" event={"ID":"a611a514-a28b-48ab-bd3a-00982afc19fb","Type":"ContainerStarted","Data":"eca22351a8e229a3e2cbb55423f77189e617805e5a7b74045375430297c98684"} Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.137247 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" event={"ID":"7033d440-7efe-4838-b31b-d84a86491a1f","Type":"ContainerDied","Data":"bc362a07a106dbbaddead49a8d868101a953d4b318a2a3e90a0dde3abfd4ac65"} Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.137293 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc362a07a106dbbaddead49a8d868101a953d4b318a2a3e90a0dde3abfd4ac65" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.137372 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-456c-account-create-update-s5w5c" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.146283 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vqw86" event={"ID":"b0c799f6-7ef0-4d2b-ae60-7808e22e9699","Type":"ContainerDied","Data":"c714dafef778194dc4ee618d83491484a5ba6f5240a5df33d2bbeb53a58b2253"} Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.146330 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c714dafef778194dc4ee618d83491484a5ba6f5240a5df33d2bbeb53a58b2253" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.146410 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqw86" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.148485 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3f19-account-create-update-h6kd6" event={"ID":"1b4d4d2d-fe32-40fc-a565-8cdf4acbb853","Type":"ContainerDied","Data":"1916d0d6ff287d70d7deb84040667f932634902ea10bbbe2591281d06c31af78"} Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.148514 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1916d0d6ff287d70d7deb84040667f932634902ea10bbbe2591281d06c31af78" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.148568 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3f19-account-create-update-h6kd6" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.158177 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c879q" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.161248 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c879q" event={"ID":"e59f8f2a-53c9-4e5a-bb13-e79058d62972","Type":"ContainerDied","Data":"777f42fa130e47883bfce4621fab237466908aff9d26f002fcde5925d5ca9264"} Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.161308 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="777f42fa130e47883bfce4621fab237466908aff9d26f002fcde5925d5ca9264" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.163461 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-6k7tg" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.163748 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6k7tg" event={"ID":"fd643b41-42c6-4d59-8ee9-591b9717a088","Type":"ContainerDied","Data":"67703f826736cdcf622284232c87cfc870fab2e5d2e8dced1888a955132e583e"} Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.163790 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67703f826736cdcf622284232c87cfc870fab2e5d2e8dced1888a955132e583e" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.173184 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f2b8-account-create-update-x59c5" event={"ID":"006407e5-c179-4f35-874a-af73cc024106","Type":"ContainerDied","Data":"38b6284605d0efcc323061bcc9d4c863b2f64ef16975a9d873ea26438224930c"} Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.173225 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b6284605d0efcc323061bcc9d4c863b2f64ef16975a9d873ea26438224930c" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.173281 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f2b8-account-create-update-x59c5" Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.178224 4966 generic.go:334] "Generic (PLEG): container finished" podID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerID="bdba0b211926a4b0fa7405022533bc91f629b32bffc8a5e980902fcbf6cf6cfa" exitCode=0 Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.178307 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerDied","Data":"bdba0b211926a4b0fa7405022533bc91f629b32bffc8a5e980902fcbf6cf6cfa"} Jan 27 16:03:56 crc kubenswrapper[4966]: I0127 16:03:56.701141 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:03:56 crc kubenswrapper[4966]: E0127 16:03:56.701370 4966 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 16:03:56 crc kubenswrapper[4966]: E0127 16:03:56.701548 4966 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 16:03:56 crc kubenswrapper[4966]: E0127 16:03:56.701621 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift podName:a59c903b-6e40-43bd-a120-e47e504cf5a9 nodeName:}" failed. No retries permitted until 2026-01-27 16:04:12.70159635 +0000 UTC m=+1319.004389838 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift") pod "swift-storage-0" (UID: "a59c903b-6e40-43bd-a120-e47e504cf5a9") : configmap "swift-ring-files" not found
Jan 27 16:03:57 crc kubenswrapper[4966]: I0127 16:03:57.188409 4966 generic.go:334] "Generic (PLEG): container finished" podID="a611a514-a28b-48ab-bd3a-00982afc19fb" containerID="15f5c81dd9d223e1dfecc2cdc46452c5a6b63c767021b2b37e8a04cb85a7ba5f" exitCode=0
Jan 27 16:03:57 crc kubenswrapper[4966]: I0127 16:03:57.188568 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9h649" event={"ID":"a611a514-a28b-48ab-bd3a-00982afc19fb","Type":"ContainerDied","Data":"15f5c81dd9d223e1dfecc2cdc46452c5a6b63c767021b2b37e8a04cb85a7ba5f"}
Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.315382 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.626816 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-s9cgg"]
Jan 27 16:03:58 crc kubenswrapper[4966]: E0127 16:03:58.627512 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4d4d2d-fe32-40fc-a565-8cdf4acbb853" containerName="mariadb-account-create-update"
Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627530 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4d4d2d-fe32-40fc-a565-8cdf4acbb853" containerName="mariadb-account-create-update"
Jan 27 16:03:58 crc kubenswrapper[4966]: E0127 16:03:58.627545 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7033d440-7efe-4838-b31b-d84a86491a1f" containerName="mariadb-account-create-update"
Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627552 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7033d440-7efe-4838-b31b-d84a86491a1f" containerName="mariadb-account-create-update"
Jan 27 16:03:58 crc kubenswrapper[4966]: E0127 16:03:58.627563 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59f8f2a-53c9-4e5a-bb13-e79058d62972" containerName="mariadb-database-create"
Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627569 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59f8f2a-53c9-4e5a-bb13-e79058d62972" containerName="mariadb-database-create"
Jan 27 16:03:58 crc kubenswrapper[4966]: E0127 16:03:58.627580 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006407e5-c179-4f35-874a-af73cc024106" containerName="mariadb-account-create-update"
Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627586 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="006407e5-c179-4f35-874a-af73cc024106" containerName="mariadb-account-create-update"
Jan 27 16:03:58 crc kubenswrapper[4966]: E0127 16:03:58.627604 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd643b41-42c6-4d59-8ee9-591b9717a088" containerName="mariadb-database-create"
Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627610 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd643b41-42c6-4d59-8ee9-591b9717a088" containerName="mariadb-database-create"
Jan 27 16:03:58 crc kubenswrapper[4966]: E0127 16:03:58.627623 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c799f6-7ef0-4d2b-ae60-7808e22e9699" containerName="mariadb-database-create"
Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627629 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c799f6-7ef0-4d2b-ae60-7808e22e9699" containerName="mariadb-database-create"
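
The nestedpendingoperations.go:348 record above is the kubelet's volume retry backoff at work: the projected etc-swift volume for swift-storage-0 cannot be built because the swift-ring-files configmap does not exist yet (the swift-ring-rebalance job publishes it later in this log), so the mount is re-queued with "No retries permitted until ... (durationBeforeRetry 16s)". Failed volume operations are retried with exponentially growing delays. A stdlib sketch of that doubling-with-cap pattern; the 2s starting delay and roughly two-minute cap are assumptions for illustration, not the kubelet's exact constants:

// backoff_sketch.go - not kubelet code; illustrates the exponential backoff
// behind "durationBeforeRetry 16s". Each failed MountVolume.SetUp doubles
// the wait, up to a cap, until the missing configmap finally appears.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 2 * time.Second                    // assumed starting delay
	maxDelay := 2*time.Minute + 2*time.Second   // assumed cap
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed: configmap \"swift-ring-files\" not found; next retry in %v\n",
			attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

By the fourth doubling the delay reaches the 16s seen in the record, which is why the retry timestamp (16:04:12) sits 16 seconds after the failure.
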
CPUSet assignment" podUID="b0c799f6-7ef0-4d2b-ae60-7808e22e9699" containerName="mariadb-database-create" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627809 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd643b41-42c6-4d59-8ee9-591b9717a088" containerName="mariadb-database-create" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627820 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="7033d440-7efe-4838-b31b-d84a86491a1f" containerName="mariadb-account-create-update" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627834 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b4d4d2d-fe32-40fc-a565-8cdf4acbb853" containerName="mariadb-account-create-update" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627841 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59f8f2a-53c9-4e5a-bb13-e79058d62972" containerName="mariadb-database-create" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627852 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="006407e5-c179-4f35-874a-af73cc024106" containerName="mariadb-account-create-update" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.627862 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c799f6-7ef0-4d2b-ae60-7808e22e9699" containerName="mariadb-database-create" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.628516 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.632761 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mrw97" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.632968 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.638573 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-s9cgg"] Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.686123 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9h649" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.761220 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a611a514-a28b-48ab-bd3a-00982afc19fb-operator-scripts\") pod \"a611a514-a28b-48ab-bd3a-00982afc19fb\" (UID: \"a611a514-a28b-48ab-bd3a-00982afc19fb\") " Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.761353 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzhxr\" (UniqueName: \"kubernetes.io/projected/a611a514-a28b-48ab-bd3a-00982afc19fb-kube-api-access-zzhxr\") pod \"a611a514-a28b-48ab-bd3a-00982afc19fb\" (UID: \"a611a514-a28b-48ab-bd3a-00982afc19fb\") " Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.761537 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-db-sync-config-data\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.761567 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-combined-ca-bundle\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.761665 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-config-data\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.761728 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8p7\" (UniqueName: \"kubernetes.io/projected/33f50afa-bf36-4363-8076-0b8271d89a85-kube-api-access-dt8p7\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.762181 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a611a514-a28b-48ab-bd3a-00982afc19fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a611a514-a28b-48ab-bd3a-00982afc19fb" (UID: "a611a514-a28b-48ab-bd3a-00982afc19fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.788099 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a611a514-a28b-48ab-bd3a-00982afc19fb-kube-api-access-zzhxr" (OuterVolumeSpecName: "kube-api-access-zzhxr") pod "a611a514-a28b-48ab-bd3a-00982afc19fb" (UID: "a611a514-a28b-48ab-bd3a-00982afc19fb"). InnerVolumeSpecName "kube-api-access-zzhxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.863603 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-combined-ca-bundle\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.863749 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-config-data\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.863825 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt8p7\" (UniqueName: \"kubernetes.io/projected/33f50afa-bf36-4363-8076-0b8271d89a85-kube-api-access-dt8p7\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.863878 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-db-sync-config-data\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.863941 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzhxr\" (UniqueName: \"kubernetes.io/projected/a611a514-a28b-48ab-bd3a-00982afc19fb-kube-api-access-zzhxr\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.863953 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a611a514-a28b-48ab-bd3a-00982afc19fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.885678 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-combined-ca-bundle\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.886620 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-db-sync-config-data\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.892560 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt8p7\" (UniqueName: \"kubernetes.io/projected/33f50afa-bf36-4363-8076-0b8271d89a85-kube-api-access-dt8p7\") pod \"glance-db-sync-s9cgg\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:58 crc kubenswrapper[4966]: I0127 16:03:58.893576 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-config-data\") pod \"glance-db-sync-s9cgg\" (UID: 
\"33f50afa-bf36-4363-8076-0b8271d89a85\") " pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:59 crc kubenswrapper[4966]: I0127 16:03:59.002223 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-s9cgg" Jan 27 16:03:59 crc kubenswrapper[4966]: I0127 16:03:59.236447 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9h649" event={"ID":"a611a514-a28b-48ab-bd3a-00982afc19fb","Type":"ContainerDied","Data":"eca22351a8e229a3e2cbb55423f77189e617805e5a7b74045375430297c98684"} Jan 27 16:03:59 crc kubenswrapper[4966]: I0127 16:03:59.236484 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca22351a8e229a3e2cbb55423f77189e617805e5a7b74045375430297c98684" Jan 27 16:03:59 crc kubenswrapper[4966]: I0127 16:03:59.236547 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9h649" Jan 27 16:03:59 crc kubenswrapper[4966]: I0127 16:03:59.701339 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-s9cgg"] Jan 27 16:03:59 crc kubenswrapper[4966]: I0127 16:03:59.996837 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4q69b"] Jan 27 16:03:59 crc kubenswrapper[4966]: E0127 16:03:59.997525 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a611a514-a28b-48ab-bd3a-00982afc19fb" containerName="mariadb-account-create-update" Jan 27 16:03:59 crc kubenswrapper[4966]: I0127 16:03:59.997541 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a611a514-a28b-48ab-bd3a-00982afc19fb" containerName="mariadb-account-create-update" Jan 27 16:03:59 crc kubenswrapper[4966]: I0127 16:03:59.997739 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a611a514-a28b-48ab-bd3a-00982afc19fb" containerName="mariadb-account-create-update" Jan 27 16:03:59 crc kubenswrapper[4966]: I0127 16:03:59.998383 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.023444 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4q69b"] Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.086932 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwnq2\" (UniqueName: \"kubernetes.io/projected/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-kube-api-access-lwnq2\") pod \"mysqld-exporter-openstack-cell1-db-create-4q69b\" (UID: \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.087060 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-4q69b\" (UID: \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.187831 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwnq2\" (UniqueName: \"kubernetes.io/projected/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-kube-api-access-lwnq2\") pod \"mysqld-exporter-openstack-cell1-db-create-4q69b\" (UID: \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.187943 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-4q69b\" (UID: \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.188643 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-4q69b\" (UID: \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.205453 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwnq2\" (UniqueName: \"kubernetes.io/projected/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-kube-api-access-lwnq2\") pod \"mysqld-exporter-openstack-cell1-db-create-4q69b\" (UID: \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.247712 4966 generic.go:334] "Generic (PLEG): container finished" podID="5036c06b-cb10-4530-9315-4ba4dee273f0" containerID="a7209bfddd78309a0bfba8667676d69d42139f7126308ab7d9c0bc9f96c3dc84" exitCode=0 Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.247782 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6r97n" event={"ID":"5036c06b-cb10-4530-9315-4ba4dee273f0","Type":"ContainerDied","Data":"a7209bfddd78309a0bfba8667676d69d42139f7126308ab7d9c0bc9f96c3dc84"} Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.249646 4966 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s9cgg" event={"ID":"33f50afa-bf36-4363-8076-0b8271d89a85","Type":"ContainerStarted","Data":"28c6aa672a48c107be087cc8a18492ad9315504d7fb332d776fc6c6ad90595d7"}
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.302061 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"]
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.304627 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.307318 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret"
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.312849 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"]
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.314592 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b"
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.395048 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6f7696-a033-4ffd-a248-08cc900c0def-operator-scripts\") pod \"mysqld-exporter-b2ff-account-create-update-8jnqx\" (UID: \"4c6f7696-a033-4ffd-a248-08cc900c0def\") " pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.395308 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c42hq\" (UniqueName: \"kubernetes.io/projected/4c6f7696-a033-4ffd-a248-08cc900c0def-kube-api-access-c42hq\") pod \"mysqld-exporter-b2ff-account-create-update-8jnqx\" (UID: \"4c6f7696-a033-4ffd-a248-08cc900c0def\") " pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.496929 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c42hq\" (UniqueName: \"kubernetes.io/projected/4c6f7696-a033-4ffd-a248-08cc900c0def-kube-api-access-c42hq\") pod \"mysqld-exporter-b2ff-account-create-update-8jnqx\" (UID: \"4c6f7696-a033-4ffd-a248-08cc900c0def\") " pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.497084 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6f7696-a033-4ffd-a248-08cc900c0def-operator-scripts\") pod \"mysqld-exporter-b2ff-account-create-update-8jnqx\" (UID: \"4c6f7696-a033-4ffd-a248-08cc900c0def\") " pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.497996 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6f7696-a033-4ffd-a248-08cc900c0def-operator-scripts\") pod \"mysqld-exporter-b2ff-account-create-update-8jnqx\" (UID: \"4c6f7696-a033-4ffd-a248-08cc900c0def\") " pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"
Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.538283 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c42hq\" (UniqueName: \"kubernetes.io/projected/4c6f7696-a033-4ffd-a248-08cc900c0def-kube-api-access-c42hq\") pod \"mysqld-exporter-b2ff-account-create-update-8jnqx\" (UID: \"4c6f7696-a033-4ffd-a248-08cc900c0def\") " pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"
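
The reconciler records above walk one pod's volumes through the mount pipeline: reconciler_common.go first verifies the volume is attached (VerifyControllerAttachedVolume), then starts the mount (MountVolume started), and operation_generator.go finally reports MountVolume.SetUp succeeded. The same machinery drives the unmount path seen earlier (UnmountVolume.TearDown succeeded, then "Volume detached"). A sketch of that desired-state vs. actual-state loop with invented types; this illustrates the pattern only, not kubelet's actual implementation:

// reconcile_sketch.go - illustration of the reconciler_common.go pattern:
// volumes in the desired set but not the actual set get mounted; volumes
// only in the actual set get unmounted and then reported detached.
package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
			actual[v] = true // stand-in for MountVolume.SetUp succeeding
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
			delete(actual, v) // stand-in for UnmountVolume.TearDown succeeding
			fmt.Printf("Volume detached for volume %q\n", v)
		}
	}
}

func main() {
	desired := map[string]bool{"operator-scripts": true, "kube-api-access-c42hq": true}
	actual := map[string]bool{}
	reconcile(desired, actual) // pod added: both volumes mounted
	delete(desired, "operator-scripts")
	reconcile(desired, actual) // pod removed from spec: volume torn down
}
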
\"kube-api-access-c42hq\" (UniqueName: \"kubernetes.io/projected/4c6f7696-a033-4ffd-a248-08cc900c0def-kube-api-access-c42hq\") pod \"mysqld-exporter-b2ff-account-create-update-8jnqx\" (UID: \"4c6f7696-a033-4ffd-a248-08cc900c0def\") " pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx" Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.636877 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx" Jan 27 16:04:00 crc kubenswrapper[4966]: I0127 16:04:00.936890 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4q69b"] Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.184218 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"] Jan 27 16:04:01 crc kubenswrapper[4966]: W0127 16:04:01.193442 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c6f7696_a033_4ffd_a248_08cc900c0def.slice/crio-d566197a341c4dca6ce6495ffcb130c44b275017fb9534db36c5860cfedcfc2d WatchSource:0}: Error finding container d566197a341c4dca6ce6495ffcb130c44b275017fb9534db36c5860cfedcfc2d: Status 404 returned error can't find the container with id d566197a341c4dca6ce6495ffcb130c44b275017fb9534db36c5860cfedcfc2d Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.203462 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9h649"] Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.221124 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9h649"] Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.265470 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" event={"ID":"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668","Type":"ContainerStarted","Data":"32456e11f86d656c26bfac00572591159c5d650995fb917a810013c1f1170096"} Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.265844 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" event={"ID":"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668","Type":"ContainerStarted","Data":"9878f46a20a6f3bf010dfdd8ac60e51d1b5aee68c4a857f2549255110876cecc"} Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.269080 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx" event={"ID":"4c6f7696-a033-4ffd-a248-08cc900c0def","Type":"ContainerStarted","Data":"d566197a341c4dca6ce6495ffcb130c44b275017fb9534db36c5860cfedcfc2d"} Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.286887 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" podStartSLOduration=2.286866533 podStartE2EDuration="2.286866533s" podCreationTimestamp="2026-01-27 16:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:04:01.279361848 +0000 UTC m=+1307.582155356" watchObservedRunningTime="2026-01-27 16:04:01.286866533 +0000 UTC m=+1307.589660021" Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.881612 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.943678 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-ring-data-devices\") pod \"5036c06b-cb10-4530-9315-4ba4dee273f0\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.943737 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xsts\" (UniqueName: \"kubernetes.io/projected/5036c06b-cb10-4530-9315-4ba4dee273f0-kube-api-access-9xsts\") pod \"5036c06b-cb10-4530-9315-4ba4dee273f0\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.943783 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-swiftconf\") pod \"5036c06b-cb10-4530-9315-4ba4dee273f0\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.943840 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-dispersionconf\") pod \"5036c06b-cb10-4530-9315-4ba4dee273f0\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.943874 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-combined-ca-bundle\") pod \"5036c06b-cb10-4530-9315-4ba4dee273f0\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.943949 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-scripts\") pod \"5036c06b-cb10-4530-9315-4ba4dee273f0\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.943982 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5036c06b-cb10-4530-9315-4ba4dee273f0-etc-swift\") pod \"5036c06b-cb10-4530-9315-4ba4dee273f0\" (UID: \"5036c06b-cb10-4530-9315-4ba4dee273f0\") " Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.944259 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5036c06b-cb10-4530-9315-4ba4dee273f0" (UID: "5036c06b-cb10-4530-9315-4ba4dee273f0"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.944664 4966 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.944985 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5036c06b-cb10-4530-9315-4ba4dee273f0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5036c06b-cb10-4530-9315-4ba4dee273f0" (UID: "5036c06b-cb10-4530-9315-4ba4dee273f0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.951622 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5036c06b-cb10-4530-9315-4ba4dee273f0-kube-api-access-9xsts" (OuterVolumeSpecName: "kube-api-access-9xsts") pod "5036c06b-cb10-4530-9315-4ba4dee273f0" (UID: "5036c06b-cb10-4530-9315-4ba4dee273f0"). InnerVolumeSpecName "kube-api-access-9xsts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.954044 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5036c06b-cb10-4530-9315-4ba4dee273f0" (UID: "5036c06b-cb10-4530-9315-4ba4dee273f0"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.974287 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5036c06b-cb10-4530-9315-4ba4dee273f0" (UID: "5036c06b-cb10-4530-9315-4ba4dee273f0"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:04:01 crc kubenswrapper[4966]: I0127 16:04:01.978486 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-scripts" (OuterVolumeSpecName: "scripts") pod "5036c06b-cb10-4530-9315-4ba4dee273f0" (UID: "5036c06b-cb10-4530-9315-4ba4dee273f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.004573 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5036c06b-cb10-4530-9315-4ba4dee273f0" (UID: "5036c06b-cb10-4530-9315-4ba4dee273f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.045491 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5036c06b-cb10-4530-9315-4ba4dee273f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.045588 4966 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5036c06b-cb10-4530-9315-4ba4dee273f0-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.045656 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xsts\" (UniqueName: \"kubernetes.io/projected/5036c06b-cb10-4530-9315-4ba4dee273f0-kube-api-access-9xsts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.045707 4966 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.045753 4966 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.045800 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5036c06b-cb10-4530-9315-4ba4dee273f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.279907 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6r97n" event={"ID":"5036c06b-cb10-4530-9315-4ba4dee273f0","Type":"ContainerDied","Data":"beb508747b2f98f9e70f7eadad05189b6abec40d48fb19462d9e3cb6b574b17f"} Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.280231 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb508747b2f98f9e70f7eadad05189b6abec40d48fb19462d9e3cb6b574b17f" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.280293 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6r97n" Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.284970 4966 generic.go:334] "Generic (PLEG): container finished" podID="e5c4d7fc-d6ef-4408-84fe-d936ee3a0668" containerID="32456e11f86d656c26bfac00572591159c5d650995fb917a810013c1f1170096" exitCode=0 Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.285035 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" event={"ID":"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668","Type":"ContainerDied","Data":"32456e11f86d656c26bfac00572591159c5d650995fb917a810013c1f1170096"} Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.287380 4966 generic.go:334] "Generic (PLEG): container finished" podID="4c6f7696-a033-4ffd-a248-08cc900c0def" containerID="fb967511cb241bec3eb11f46bdf77548652d12383d18266914afd69662ee2dbf" exitCode=0 Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.287439 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx" event={"ID":"4c6f7696-a033-4ffd-a248-08cc900c0def","Type":"ContainerDied","Data":"fb967511cb241bec3eb11f46bdf77548652d12383d18266914afd69662ee2dbf"} Jan 27 16:04:02 crc kubenswrapper[4966]: I0127 16:04:02.538991 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a611a514-a28b-48ab-bd3a-00982afc19fb" path="/var/lib/kubelet/pods/a611a514-a28b-48ab-bd3a-00982afc19fb/volumes" Jan 27 16:04:03 crc kubenswrapper[4966]: I0127 16:04:03.144884 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-96kvf" podUID="7c442a88-8881-4780-a2c3-eddb5d940209" containerName="ovn-controller" probeResult="failure" output=< Jan 27 16:04:03 crc kubenswrapper[4966]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 16:04:03 crc kubenswrapper[4966]: > Jan 27 16:04:03 crc kubenswrapper[4966]: I0127 16:04:03.180682 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:04:04 crc kubenswrapper[4966]: I0127 16:04:04.715503 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 27 16:04:04 crc kubenswrapper[4966]: I0127 16:04:04.732923 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 27 16:04:04 crc kubenswrapper[4966]: I0127 16:04:04.748350 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 27 16:04:05 crc kubenswrapper[4966]: I0127 16:04:05.050059 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.175332 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-m7krk"] Jan 27 16:04:06 crc 
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.181420 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5036c06b-cb10-4530-9315-4ba4dee273f0" containerName="swift-ring-rebalance"
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.181747 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5036c06b-cb10-4530-9315-4ba4dee273f0" containerName="swift-ring-rebalance"
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.182647 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m7krk"
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.185120 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.185521 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m7krk"]
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.336942 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvq8k\" (UniqueName: \"kubernetes.io/projected/78c5e231-0221-4bc4-affd-f30db82bed7a-kube-api-access-jvq8k\") pod \"root-account-create-update-m7krk\" (UID: \"78c5e231-0221-4bc4-affd-f30db82bed7a\") " pod="openstack/root-account-create-update-m7krk"
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.337009 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c5e231-0221-4bc4-affd-f30db82bed7a-operator-scripts\") pod \"root-account-create-update-m7krk\" (UID: \"78c5e231-0221-4bc4-affd-f30db82bed7a\") " pod="openstack/root-account-create-update-m7krk"
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.442657 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvq8k\" (UniqueName: \"kubernetes.io/projected/78c5e231-0221-4bc4-affd-f30db82bed7a-kube-api-access-jvq8k\") pod \"root-account-create-update-m7krk\" (UID: \"78c5e231-0221-4bc4-affd-f30db82bed7a\") " pod="openstack/root-account-create-update-m7krk"
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.442735 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c5e231-0221-4bc4-affd-f30db82bed7a-operator-scripts\") pod \"root-account-create-update-m7krk\" (UID: \"78c5e231-0221-4bc4-affd-f30db82bed7a\") " pod="openstack/root-account-create-update-m7krk"
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.443930 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c5e231-0221-4bc4-affd-f30db82bed7a-operator-scripts\") pod \"root-account-create-update-m7krk\" (UID: \"78c5e231-0221-4bc4-affd-f30db82bed7a\") " pod="openstack/root-account-create-update-m7krk"
Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.465224 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvq8k\" (UniqueName: \"kubernetes.io/projected/78c5e231-0221-4bc4-affd-f30db82bed7a-kube-api-access-jvq8k\") pod \"root-account-create-update-m7krk\" (UID: \"78c5e231-0221-4bc4-affd-f30db82bed7a\") "
pod="openstack/root-account-create-update-m7krk" Jan 27 16:04:06 crc kubenswrapper[4966]: I0127 16:04:06.508733 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m7krk" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.122431 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-96kvf" podUID="7c442a88-8881-4780-a2c3-eddb5d940209" containerName="ovn-controller" probeResult="failure" output=< Jan 27 16:04:08 crc kubenswrapper[4966]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 16:04:08 crc kubenswrapper[4966]: > Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.163584 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5jgjj" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.356494 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.364095 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.380153 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" event={"ID":"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668","Type":"ContainerDied","Data":"9878f46a20a6f3bf010dfdd8ac60e51d1b5aee68c4a857f2549255110876cecc"} Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.380199 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9878f46a20a6f3bf010dfdd8ac60e51d1b5aee68c4a857f2549255110876cecc" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.380279 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4q69b" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.382038 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx" event={"ID":"4c6f7696-a033-4ffd-a248-08cc900c0def","Type":"ContainerDied","Data":"d566197a341c4dca6ce6495ffcb130c44b275017fb9534db36c5860cfedcfc2d"} Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.382069 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d566197a341c4dca6ce6495ffcb130c44b275017fb9534db36c5860cfedcfc2d" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.382110 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b2ff-account-create-update-8jnqx" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.490110 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96kvf-config-9wwrm"] Jan 27 16:04:08 crc kubenswrapper[4966]: E0127 16:04:08.490613 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6f7696-a033-4ffd-a248-08cc900c0def" containerName="mariadb-account-create-update" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.490638 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6f7696-a033-4ffd-a248-08cc900c0def" containerName="mariadb-account-create-update" Jan 27 16:04:08 crc kubenswrapper[4966]: E0127 16:04:08.490684 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c4d7fc-d6ef-4408-84fe-d936ee3a0668" containerName="mariadb-database-create" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.490693 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c4d7fc-d6ef-4408-84fe-d936ee3a0668" containerName="mariadb-database-create" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.490975 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6f7696-a033-4ffd-a248-08cc900c0def" containerName="mariadb-account-create-update" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.491004 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c4d7fc-d6ef-4408-84fe-d936ee3a0668" containerName="mariadb-database-create" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.491823 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.492379 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwnq2\" (UniqueName: \"kubernetes.io/projected/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-kube-api-access-lwnq2\") pod \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\" (UID: \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\") " Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.492615 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6f7696-a033-4ffd-a248-08cc900c0def-operator-scripts\") pod \"4c6f7696-a033-4ffd-a248-08cc900c0def\" (UID: \"4c6f7696-a033-4ffd-a248-08cc900c0def\") " Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.492782 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c42hq\" (UniqueName: \"kubernetes.io/projected/4c6f7696-a033-4ffd-a248-08cc900c0def-kube-api-access-c42hq\") pod \"4c6f7696-a033-4ffd-a248-08cc900c0def\" (UID: \"4c6f7696-a033-4ffd-a248-08cc900c0def\") " Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.492860 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-operator-scripts\") pod \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\" (UID: \"e5c4d7fc-d6ef-4408-84fe-d936ee3a0668\") " Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.493378 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c6f7696-a033-4ffd-a248-08cc900c0def-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c6f7696-a033-4ffd-a248-08cc900c0def" (UID: "4c6f7696-a033-4ffd-a248-08cc900c0def"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.493543 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6f7696-a033-4ffd-a248-08cc900c0def-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.493817 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5c4d7fc-d6ef-4408-84fe-d936ee3a0668" (UID: "e5c4d7fc-d6ef-4408-84fe-d936ee3a0668"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.496553 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.498013 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6f7696-a033-4ffd-a248-08cc900c0def-kube-api-access-c42hq" (OuterVolumeSpecName: "kube-api-access-c42hq") pod "4c6f7696-a033-4ffd-a248-08cc900c0def" (UID: "4c6f7696-a033-4ffd-a248-08cc900c0def"). InnerVolumeSpecName "kube-api-access-c42hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.498804 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-kube-api-access-lwnq2" (OuterVolumeSpecName: "kube-api-access-lwnq2") pod "e5c4d7fc-d6ef-4408-84fe-d936ee3a0668" (UID: "e5c4d7fc-d6ef-4408-84fe-d936ee3a0668"). InnerVolumeSpecName "kube-api-access-lwnq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.536706 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96kvf-config-9wwrm"] Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.595680 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run-ovn\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.595818 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-scripts\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.595839 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w24n\" (UniqueName: \"kubernetes.io/projected/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-kube-api-access-2w24n\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.595858 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-log-ovn\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.595933 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-additional-scripts\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.596005 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.596144 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.596184 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwnq2\" (UniqueName: \"kubernetes.io/projected/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668-kube-api-access-lwnq2\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.596197 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c42hq\" (UniqueName: \"kubernetes.io/projected/4c6f7696-a033-4ffd-a248-08cc900c0def-kube-api-access-c42hq\") on node \"crc\" DevicePath \"\"" Jan 
27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.709046 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run-ovn\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.709153 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-scripts\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.709180 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w24n\" (UniqueName: \"kubernetes.io/projected/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-kube-api-access-2w24n\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.709205 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-log-ovn\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.709242 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-additional-scripts\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.709276 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.710477 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run-ovn\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.711477 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-log-ovn\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.712205 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-additional-scripts\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 
16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.712701 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-scripts\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.712784 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.756014 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w24n\" (UniqueName: \"kubernetes.io/projected/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-kube-api-access-2w24n\") pod \"ovn-controller-96kvf-config-9wwrm\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:08 crc kubenswrapper[4966]: I0127 16:04:08.888364 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.119221 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.119573 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.120063 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.120869 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9cad2005eaacff8196cf8bd744c3709abce6b9766c6adb25e972b7211933a53f"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.120957 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://9cad2005eaacff8196cf8bd744c3709abce6b9766c6adb25e972b7211933a53f" gracePeriod=600 Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.416765 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="9cad2005eaacff8196cf8bd744c3709abce6b9766c6adb25e972b7211933a53f" exitCode=0 Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.416771 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"9cad2005eaacff8196cf8bd744c3709abce6b9766c6adb25e972b7211933a53f"} Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.416873 4966 scope.go:117] "RemoveContainer" containerID="ea82bc681b618d3fe42f05ec74306309a62aca633f08e8c15e2eb2ef6d9d0842" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.472405 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.475466 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.478298 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.501595 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.549066 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gggm2\" (UniqueName: \"kubernetes.io/projected/1af731ff-1327-4db7-b2e7-a90c451390e4-kube-api-access-gggm2\") pod \"mysqld-exporter-0\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.549190 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.549407 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-config-data\") pod \"mysqld-exporter-0\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.651520 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-config-data\") pod \"mysqld-exporter-0\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.651646 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gggm2\" (UniqueName: \"kubernetes.io/projected/1af731ff-1327-4db7-b2e7-a90c451390e4-kube-api-access-gggm2\") pod \"mysqld-exporter-0\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.651712 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.659123 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.660376 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-config-data\") pod \"mysqld-exporter-0\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.674760 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gggm2\" (UniqueName: \"kubernetes.io/projected/1af731ff-1327-4db7-b2e7-a90c451390e4-kube-api-access-gggm2\") pod \"mysqld-exporter-0\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " pod="openstack/mysqld-exporter-0" Jan 27 16:04:10 crc kubenswrapper[4966]: I0127 16:04:10.798034 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 16:04:12 crc kubenswrapper[4966]: I0127 16:04:12.793241 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:04:12 crc kubenswrapper[4966]: I0127 16:04:12.804384 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a59c903b-6e40-43bd-a120-e47e504cf5a9-etc-swift\") pod \"swift-storage-0\" (UID: \"a59c903b-6e40-43bd-a120-e47e504cf5a9\") " pod="openstack/swift-storage-0" Jan 27 16:04:12 crc kubenswrapper[4966]: I0127 16:04:12.921209 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 16:04:13 crc kubenswrapper[4966]: I0127 16:04:13.084184 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-96kvf" podUID="7c442a88-8881-4780-a2c3-eddb5d940209" containerName="ovn-controller" probeResult="failure" output=< Jan 27 16:04:13 crc kubenswrapper[4966]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 16:04:13 crc kubenswrapper[4966]: > Jan 27 16:04:14 crc kubenswrapper[4966]: I0127 16:04:14.714403 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 27 16:04:14 crc kubenswrapper[4966]: I0127 16:04:14.731302 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 27 16:04:14 crc kubenswrapper[4966]: I0127 16:04:14.745531 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 27 16:04:15 crc kubenswrapper[4966]: I0127 16:04:15.048110 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 16:04:17 crc kubenswrapper[4966]: I0127 16:04:17.799151 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96kvf-config-9wwrm"] Jan 27 16:04:17 crc kubenswrapper[4966]: I0127 16:04:17.855424 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 16:04:17 crc kubenswrapper[4966]: W0127 16:04:17.868367 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c5e231_0221_4bc4_affd_f30db82bed7a.slice/crio-04a9e95cc1e144627b25603630ad46f496658a22977f13e79838febde24ec46a WatchSource:0}: Error finding container 04a9e95cc1e144627b25603630ad46f496658a22977f13e79838febde24ec46a: Status 404 returned error can't find the container with id 04a9e95cc1e144627b25603630ad46f496658a22977f13e79838febde24ec46a Jan 27 16:04:17 crc kubenswrapper[4966]: I0127 16:04:17.873320 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m7krk"] Jan 27 16:04:17 crc kubenswrapper[4966]: I0127 16:04:17.877272 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 16:04:17 crc kubenswrapper[4966]: I0127 16:04:17.972378 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 16:04:18 crc kubenswrapper[4966]: W0127 16:04:18.002566 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda59c903b_6e40_43bd_a120_e47e504cf5a9.slice/crio-8233d0190943a08dc516773e7502e16d57cf110d3003c87a18a2f415b2398909 WatchSource:0}: Error finding container 8233d0190943a08dc516773e7502e16d57cf110d3003c87a18a2f415b2398909: Status 404 returned error can't find the container with id 8233d0190943a08dc516773e7502e16d57cf110d3003c87a18a2f415b2398909 Jan 27 16:04:18 crc 
kubenswrapper[4966]: I0127 16:04:18.162114 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-96kvf" podUID="7c442a88-8881-4780-a2c3-eddb5d940209" containerName="ovn-controller" probeResult="failure" output=< Jan 27 16:04:18 crc kubenswrapper[4966]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 16:04:18 crc kubenswrapper[4966]: > Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.516813 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1af731ff-1327-4db7-b2e7-a90c451390e4","Type":"ContainerStarted","Data":"13cdd312ec12b363c87e9386de34142a18679682d5ebddc69fbb08afcde29d96"} Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.532903 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96kvf-config-9wwrm" event={"ID":"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b","Type":"ContainerStarted","Data":"6445ae7fbca2748c1be5760b43581fe71bb7bb18839c7b2bc3e612f69dbf0ae4"} Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.532945 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96kvf-config-9wwrm" event={"ID":"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b","Type":"ContainerStarted","Data":"6de03285a987a2ed174af7b5f412b4a263c1ef9a050e2dfa9cfde543081d06c5"} Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.532959 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerStarted","Data":"cf08ac47789ffbc8ac65ce0a6803363969539d88327444d57c1b629ca8a356de"} Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.532971 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"} Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.534870 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s9cgg" event={"ID":"33f50afa-bf36-4363-8076-0b8271d89a85","Type":"ContainerStarted","Data":"dde934ed23745608fcb281716783c17ade0d45f63e3f471eb90462553e4cf4bf"} Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.537214 4966 generic.go:334] "Generic (PLEG): container finished" podID="78c5e231-0221-4bc4-affd-f30db82bed7a" containerID="1acdeccc416a100a99742414f605a35121ae3df1b644d120337569744e12ed5f" exitCode=0 Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.537278 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m7krk" event={"ID":"78c5e231-0221-4bc4-affd-f30db82bed7a","Type":"ContainerDied","Data":"1acdeccc416a100a99742414f605a35121ae3df1b644d120337569744e12ed5f"} Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.537301 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m7krk" event={"ID":"78c5e231-0221-4bc4-affd-f30db82bed7a","Type":"ContainerStarted","Data":"04a9e95cc1e144627b25603630ad46f496658a22977f13e79838febde24ec46a"} Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.539085 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"8233d0190943a08dc516773e7502e16d57cf110d3003c87a18a2f415b2398909"} Jan 27 16:04:18 crc 
kubenswrapper[4966]: I0127 16:04:18.579255 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96kvf-config-9wwrm" podStartSLOduration=10.579230615 podStartE2EDuration="10.579230615s" podCreationTimestamp="2026-01-27 16:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:04:18.556193882 +0000 UTC m=+1324.858987390" watchObservedRunningTime="2026-01-27 16:04:18.579230615 +0000 UTC m=+1324.882024103" Jan 27 16:04:18 crc kubenswrapper[4966]: I0127 16:04:18.602434 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-s9cgg" podStartSLOduration=3.08533324 podStartE2EDuration="20.602414032s" podCreationTimestamp="2026-01-27 16:03:58 +0000 UTC" firstStartedPulling="2026-01-27 16:03:59.705933585 +0000 UTC m=+1306.008727073" lastFinishedPulling="2026-01-27 16:04:17.223014377 +0000 UTC m=+1323.525807865" observedRunningTime="2026-01-27 16:04:18.594172624 +0000 UTC m=+1324.896966122" watchObservedRunningTime="2026-01-27 16:04:18.602414032 +0000 UTC m=+1324.905207520" Jan 27 16:04:19 crc kubenswrapper[4966]: I0127 16:04:19.549730 4966 generic.go:334] "Generic (PLEG): container finished" podID="226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" containerID="6445ae7fbca2748c1be5760b43581fe71bb7bb18839c7b2bc3e612f69dbf0ae4" exitCode=0 Jan 27 16:04:19 crc kubenswrapper[4966]: I0127 16:04:19.549819 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96kvf-config-9wwrm" event={"ID":"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b","Type":"ContainerDied","Data":"6445ae7fbca2748c1be5760b43581fe71bb7bb18839c7b2bc3e612f69dbf0ae4"} Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.456692 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m7krk" Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.478271 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c5e231-0221-4bc4-affd-f30db82bed7a-operator-scripts\") pod \"78c5e231-0221-4bc4-affd-f30db82bed7a\" (UID: \"78c5e231-0221-4bc4-affd-f30db82bed7a\") " Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.478559 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvq8k\" (UniqueName: \"kubernetes.io/projected/78c5e231-0221-4bc4-affd-f30db82bed7a-kube-api-access-jvq8k\") pod \"78c5e231-0221-4bc4-affd-f30db82bed7a\" (UID: \"78c5e231-0221-4bc4-affd-f30db82bed7a\") " Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.479830 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c5e231-0221-4bc4-affd-f30db82bed7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78c5e231-0221-4bc4-affd-f30db82bed7a" (UID: "78c5e231-0221-4bc4-affd-f30db82bed7a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.482958 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c5e231-0221-4bc4-affd-f30db82bed7a-kube-api-access-jvq8k" (OuterVolumeSpecName: "kube-api-access-jvq8k") pod "78c5e231-0221-4bc4-affd-f30db82bed7a" (UID: "78c5e231-0221-4bc4-affd-f30db82bed7a"). InnerVolumeSpecName "kube-api-access-jvq8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.484955 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c5e231-0221-4bc4-affd-f30db82bed7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.484997 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvq8k\" (UniqueName: \"kubernetes.io/projected/78c5e231-0221-4bc4-affd-f30db82bed7a-kube-api-access-jvq8k\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.561719 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.572293 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m7krk" Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.572280 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m7krk" event={"ID":"78c5e231-0221-4bc4-affd-f30db82bed7a","Type":"ContainerDied","Data":"04a9e95cc1e144627b25603630ad46f496658a22977f13e79838febde24ec46a"} Jan 27 16:04:20 crc kubenswrapper[4966]: I0127 16:04:20.572369 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a9e95cc1e144627b25603630ad46f496658a22977f13e79838febde24ec46a" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.293070 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.404591 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-additional-scripts\") pod \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.405080 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w24n\" (UniqueName: \"kubernetes.io/projected/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-kube-api-access-2w24n\") pod \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.405125 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run\") pod \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.405177 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run-ovn\") pod \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.405256 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-scripts\") pod \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.405294 4966 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-log-ovn\") pod \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\" (UID: \"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b\") " Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.405857 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" (UID: "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.406727 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" (UID: "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.408166 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run" (OuterVolumeSpecName: "var-run") pod "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" (UID: "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.408141 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" (UID: "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.409035 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-scripts" (OuterVolumeSpecName: "scripts") pod "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" (UID: "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.415983 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-kube-api-access-2w24n" (OuterVolumeSpecName: "kube-api-access-2w24n") pod "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" (UID: "226820e8-7e13-433e-ba0d-f3c6c9ec2b8b"). InnerVolumeSpecName "kube-api-access-2w24n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.507601 4966 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.507845 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w24n\" (UniqueName: \"kubernetes.io/projected/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-kube-api-access-2w24n\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.507938 4966 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.508027 4966 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.508135 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.508237 4966 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.583240 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96kvf-config-9wwrm" event={"ID":"226820e8-7e13-433e-ba0d-f3c6c9ec2b8b","Type":"ContainerDied","Data":"6de03285a987a2ed174af7b5f412b4a263c1ef9a050e2dfa9cfde543081d06c5"} Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.583284 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de03285a987a2ed174af7b5f412b4a263c1ef9a050e2dfa9cfde543081d06c5" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.583255 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96kvf-config-9wwrm" Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.584857 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"3c64c0c5c5219f662e4b621c2effba0dfa6008e3b895853f7bcd542959735392"} Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.586749 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerStarted","Data":"e0c9009d1c35891a4e6f54e0dffa1b4d9023a4a09b6706b57a973b768e038f6d"} Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.588778 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1af731ff-1327-4db7-b2e7-a90c451390e4","Type":"ContainerStarted","Data":"89be93a796137a45ba13c5e9527e2967c52e649ebf3f3579d47fc3933d7e33f9"} Jan 27 16:04:21 crc kubenswrapper[4966]: I0127 16:04:21.608515 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=9.423421079 podStartE2EDuration="11.608494831s" podCreationTimestamp="2026-01-27 16:04:10 +0000 UTC" firstStartedPulling="2026-01-27 16:04:17.883345943 +0000 UTC m=+1324.186139421" lastFinishedPulling="2026-01-27 16:04:20.068419675 +0000 UTC m=+1326.371213173" observedRunningTime="2026-01-27 16:04:21.606604762 +0000 UTC m=+1327.909398260" watchObservedRunningTime="2026-01-27 16:04:21.608494831 +0000 UTC m=+1327.911288329" Jan 27 16:04:22 crc kubenswrapper[4966]: I0127 16:04:22.385973 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-96kvf-config-9wwrm"] Jan 27 16:04:22 crc kubenswrapper[4966]: I0127 16:04:22.394648 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-96kvf-config-9wwrm"] Jan 27 16:04:22 crc kubenswrapper[4966]: I0127 16:04:22.538479 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" path="/var/lib/kubelet/pods/226820e8-7e13-433e-ba0d-f3c6c9ec2b8b/volumes" Jan 27 16:04:22 crc kubenswrapper[4966]: I0127 16:04:22.600633 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"cf67ca2c2cbbcd4554bb5792d271866dae36595624a13fff9046f967521344bd"} Jan 27 16:04:22 crc kubenswrapper[4966]: I0127 16:04:22.600689 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"74a1e4af45572885f7e9da5053f0dde3f76ef62f523df9504f177d3b76ec6b51"} Jan 27 16:04:22 crc kubenswrapper[4966]: I0127 16:04:22.600702 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"0035cd5311bc4d7752af849f9040ec34c90dbea1aeacae3cab6cbad7c191609f"} Jan 27 16:04:23 crc kubenswrapper[4966]: I0127 16:04:23.110722 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-96kvf" Jan 27 16:04:24 crc kubenswrapper[4966]: I0127 16:04:24.715125 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 16:04:24 crc kubenswrapper[4966]: I0127 16:04:24.732126 4966 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 27 16:04:24 crc kubenswrapper[4966]: I0127 16:04:24.747816 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 16:04:25 crc kubenswrapper[4966]: I0127 16:04:25.629231 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"260eff4718ee765a0bf8cf3dd6bfc669b932af2ead0895da845d84b936336d1f"} Jan 27 16:04:25 crc kubenswrapper[4966]: I0127 16:04:25.629733 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"5b4c2259a341e593e9bd30c66979449483a94055cd1705a02c1e2040eda46aea"} Jan 27 16:04:25 crc kubenswrapper[4966]: I0127 16:04:25.629746 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"fca968af7e4ae08cb0ed6cf50422b227b42713ea5822fc84377ba613fd57ea7e"} Jan 27 16:04:25 crc kubenswrapper[4966]: I0127 16:04:25.629756 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"c0a4c8f3515f967aea1e9d245d0c6f5804191d1001d9b63323f0798c34371c49"} Jan 27 16:04:25 crc kubenswrapper[4966]: I0127 16:04:25.631618 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerStarted","Data":"1ff889379f525d5cb134a5e03332a2a08366031fb54bf2aad960f0645badaf20"} Jan 27 16:04:25 crc kubenswrapper[4966]: I0127 16:04:25.666686 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=13.378410543 podStartE2EDuration="41.666667109s" podCreationTimestamp="2026-01-27 16:03:44 +0000 UTC" firstStartedPulling="2026-01-27 16:03:56.181990367 +0000 UTC m=+1302.484783855" lastFinishedPulling="2026-01-27 16:04:24.470246933 +0000 UTC m=+1330.773040421" observedRunningTime="2026-01-27 16:04:25.660085032 +0000 UTC m=+1331.962878530" watchObservedRunningTime="2026-01-27 16:04:25.666667109 +0000 UTC m=+1331.969460587" Jan 27 16:04:26 crc kubenswrapper[4966]: I0127 16:04:26.936100 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-5fdfx"] Jan 27 16:04:26 crc kubenswrapper[4966]: E0127 16:04:26.936804 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" containerName="ovn-config" Jan 27 16:04:26 crc kubenswrapper[4966]: I0127 16:04:26.936820 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" containerName="ovn-config" Jan 27 16:04:26 crc kubenswrapper[4966]: E0127 16:04:26.936839 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c5e231-0221-4bc4-affd-f30db82bed7a" containerName="mariadb-account-create-update" Jan 27 16:04:26 crc kubenswrapper[4966]: I0127 16:04:26.936845 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c5e231-0221-4bc4-affd-f30db82bed7a" containerName="mariadb-account-create-update" Jan 27 16:04:26 crc kubenswrapper[4966]: I0127 16:04:26.937031 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c5e231-0221-4bc4-affd-f30db82bed7a" 
containerName="mariadb-account-create-update" Jan 27 16:04:26 crc kubenswrapper[4966]: I0127 16:04:26.937057 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="226820e8-7e13-433e-ba0d-f3c6c9ec2b8b" containerName="ovn-config" Jan 27 16:04:26 crc kubenswrapper[4966]: I0127 16:04:26.937719 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:26 crc kubenswrapper[4966]: I0127 16:04:26.951746 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5fdfx"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.032376 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-9d6a-account-create-update-4cs7w"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.034547 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.040675 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.045719 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54vc\" (UniqueName: \"kubernetes.io/projected/bca13755-0032-4941-bfcd-7550197712c7-kube-api-access-v54vc\") pod \"heat-db-create-5fdfx\" (UID: \"bca13755-0032-4941-bfcd-7550197712c7\") " pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.045792 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca13755-0032-4941-bfcd-7550197712c7-operator-scripts\") pod \"heat-db-create-5fdfx\" (UID: \"bca13755-0032-4941-bfcd-7550197712c7\") " pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.104246 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-9d6a-account-create-update-4cs7w"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.148071 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7608642-7d28-462c-9838-2b1aa51a69ed-operator-scripts\") pod \"heat-9d6a-account-create-update-4cs7w\" (UID: \"e7608642-7d28-462c-9838-2b1aa51a69ed\") " pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.148353 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zt8w\" (UniqueName: \"kubernetes.io/projected/e7608642-7d28-462c-9838-2b1aa51a69ed-kube-api-access-6zt8w\") pod \"heat-9d6a-account-create-update-4cs7w\" (UID: \"e7608642-7d28-462c-9838-2b1aa51a69ed\") " pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.148510 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54vc\" (UniqueName: \"kubernetes.io/projected/bca13755-0032-4941-bfcd-7550197712c7-kube-api-access-v54vc\") pod \"heat-db-create-5fdfx\" (UID: \"bca13755-0032-4941-bfcd-7550197712c7\") " pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.148593 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bca13755-0032-4941-bfcd-7550197712c7-operator-scripts\") pod \"heat-db-create-5fdfx\" (UID: \"bca13755-0032-4941-bfcd-7550197712c7\") " pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.149348 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca13755-0032-4941-bfcd-7550197712c7-operator-scripts\") pod \"heat-db-create-5fdfx\" (UID: \"bca13755-0032-4941-bfcd-7550197712c7\") " pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.168613 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54vc\" (UniqueName: \"kubernetes.io/projected/bca13755-0032-4941-bfcd-7550197712c7-kube-api-access-v54vc\") pod \"heat-db-create-5fdfx\" (UID: \"bca13755-0032-4941-bfcd-7550197712c7\") " pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.233804 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9524-account-create-update-nwkpp"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.243987 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.246544 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.252335 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zt8w\" (UniqueName: \"kubernetes.io/projected/e7608642-7d28-462c-9838-2b1aa51a69ed-kube-api-access-6zt8w\") pod \"heat-9d6a-account-create-update-4cs7w\" (UID: \"e7608642-7d28-462c-9838-2b1aa51a69ed\") " pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.252534 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7608642-7d28-462c-9838-2b1aa51a69ed-operator-scripts\") pod \"heat-9d6a-account-create-update-4cs7w\" (UID: \"e7608642-7d28-462c-9838-2b1aa51a69ed\") " pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.253427 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7608642-7d28-462c-9838-2b1aa51a69ed-operator-scripts\") pod \"heat-9d6a-account-create-update-4cs7w\" (UID: \"e7608642-7d28-462c-9838-2b1aa51a69ed\") " pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.261454 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-nd6n6"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.262916 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.280185 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.284017 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zt8w\" (UniqueName: \"kubernetes.io/projected/e7608642-7d28-462c-9838-2b1aa51a69ed-kube-api-access-6zt8w\") pod \"heat-9d6a-account-create-update-4cs7w\" (UID: \"e7608642-7d28-462c-9838-2b1aa51a69ed\") " pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.286420 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9524-account-create-update-nwkpp"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.294991 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nd6n6"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.355213 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxg8h\" (UniqueName: \"kubernetes.io/projected/e84c4662-7140-4e74-beff-dc84f3c0b6c7-kube-api-access-mxg8h\") pod \"cinder-db-create-nd6n6\" (UID: \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\") " pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.355342 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84c4662-7140-4e74-beff-dc84f3c0b6c7-operator-scripts\") pod \"cinder-db-create-nd6n6\" (UID: \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\") " pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.355437 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7p2m\" (UniqueName: \"kubernetes.io/projected/4fe63661-1f35-4598-ab4e-86f934127864-kube-api-access-k7p2m\") pod \"barbican-9524-account-create-update-nwkpp\" (UID: \"4fe63661-1f35-4598-ab4e-86f934127864\") " pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.355584 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe63661-1f35-4598-ab4e-86f934127864-operator-scripts\") pod \"barbican-9524-account-create-update-nwkpp\" (UID: \"4fe63661-1f35-4598-ab4e-86f934127864\") " pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.366329 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.372002 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tzbwt"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.373546 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.404037 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tzbwt"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.430121 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-sllzs"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.431940 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.435439 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.435750 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-97zfb" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.435880 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.440979 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sllzs"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.445526 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.459663 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7p2m\" (UniqueName: \"kubernetes.io/projected/4fe63661-1f35-4598-ab4e-86f934127864-kube-api-access-k7p2m\") pod \"barbican-9524-account-create-update-nwkpp\" (UID: \"4fe63661-1f35-4598-ab4e-86f934127864\") " pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.459773 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-operator-scripts\") pod \"barbican-db-create-tzbwt\" (UID: \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\") " pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.459800 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcm4k\" (UniqueName: \"kubernetes.io/projected/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-kube-api-access-zcm4k\") pod \"barbican-db-create-tzbwt\" (UID: \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\") " pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.459865 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe63661-1f35-4598-ab4e-86f934127864-operator-scripts\") pod \"barbican-9524-account-create-update-nwkpp\" (UID: \"4fe63661-1f35-4598-ab4e-86f934127864\") " pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.459927 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxg8h\" (UniqueName: \"kubernetes.io/projected/e84c4662-7140-4e74-beff-dc84f3c0b6c7-kube-api-access-mxg8h\") pod \"cinder-db-create-nd6n6\" (UID: \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\") " pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.460569 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84c4662-7140-4e74-beff-dc84f3c0b6c7-operator-scripts\") pod \"cinder-db-create-nd6n6\" (UID: \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\") " pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.461308 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2dda-account-create-update-q4kct"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.461346 
4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe63661-1f35-4598-ab4e-86f934127864-operator-scripts\") pod \"barbican-9524-account-create-update-nwkpp\" (UID: \"4fe63661-1f35-4598-ab4e-86f934127864\") " pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.461503 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84c4662-7140-4e74-beff-dc84f3c0b6c7-operator-scripts\") pod \"cinder-db-create-nd6n6\" (UID: \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\") " pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.462752 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.464945 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.473008 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2dda-account-create-update-q4kct"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.510585 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7p2m\" (UniqueName: \"kubernetes.io/projected/4fe63661-1f35-4598-ab4e-86f934127864-kube-api-access-k7p2m\") pod \"barbican-9524-account-create-update-nwkpp\" (UID: \"4fe63661-1f35-4598-ab4e-86f934127864\") " pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.523980 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxg8h\" (UniqueName: \"kubernetes.io/projected/e84c4662-7140-4e74-beff-dc84f3c0b6c7-kube-api-access-mxg8h\") pod \"cinder-db-create-nd6n6\" (UID: \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\") " pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.562159 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-operator-scripts\") pod \"barbican-db-create-tzbwt\" (UID: \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\") " pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.562202 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcm4k\" (UniqueName: \"kubernetes.io/projected/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-kube-api-access-zcm4k\") pod \"barbican-db-create-tzbwt\" (UID: \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\") " pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.562237 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5k8n\" (UniqueName: \"kubernetes.io/projected/ee305aa0-15fe-46a9-b62f-8936153daddf-kube-api-access-t5k8n\") pod \"keystone-db-sync-sllzs\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") " pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.562257 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca52e453-b636-461d-b18f-1ca06af6a91f-operator-scripts\") pod 
\"cinder-2dda-account-create-update-q4kct\" (UID: \"ca52e453-b636-461d-b18f-1ca06af6a91f\") " pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.562343 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq286\" (UniqueName: \"kubernetes.io/projected/ca52e453-b636-461d-b18f-1ca06af6a91f-kube-api-access-wq286\") pod \"cinder-2dda-account-create-update-q4kct\" (UID: \"ca52e453-b636-461d-b18f-1ca06af6a91f\") " pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.562400 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-config-data\") pod \"keystone-db-sync-sllzs\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") " pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.562441 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-combined-ca-bundle\") pod \"keystone-db-sync-sllzs\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") " pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.565550 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-operator-scripts\") pod \"barbican-db-create-tzbwt\" (UID: \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\") " pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.574115 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.576172 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5sjsx"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.577858 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.594993 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5sjsx"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.596330 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcm4k\" (UniqueName: \"kubernetes.io/projected/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-kube-api-access-zcm4k\") pod \"barbican-db-create-tzbwt\" (UID: \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\") " pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.663886 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crbp8\" (UniqueName: \"kubernetes.io/projected/92dd5605-46b2-44d0-b2e8-30492c2049ea-kube-api-access-crbp8\") pod \"neutron-db-create-5sjsx\" (UID: \"92dd5605-46b2-44d0-b2e8-30492c2049ea\") " pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.664181 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-config-data\") pod \"keystone-db-sync-sllzs\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") " pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.664225 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-combined-ca-bundle\") pod \"keystone-db-sync-sllzs\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") " pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.664265 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92dd5605-46b2-44d0-b2e8-30492c2049ea-operator-scripts\") pod \"neutron-db-create-5sjsx\" (UID: \"92dd5605-46b2-44d0-b2e8-30492c2049ea\") " pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.664318 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5k8n\" (UniqueName: \"kubernetes.io/projected/ee305aa0-15fe-46a9-b62f-8936153daddf-kube-api-access-t5k8n\") pod \"keystone-db-sync-sllzs\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") " pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.664339 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca52e453-b636-461d-b18f-1ca06af6a91f-operator-scripts\") pod \"cinder-2dda-account-create-update-q4kct\" (UID: \"ca52e453-b636-461d-b18f-1ca06af6a91f\") " pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.664454 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq286\" (UniqueName: \"kubernetes.io/projected/ca52e453-b636-461d-b18f-1ca06af6a91f-kube-api-access-wq286\") pod \"cinder-2dda-account-create-update-q4kct\" (UID: \"ca52e453-b636-461d-b18f-1ca06af6a91f\") " pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.665822 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca52e453-b636-461d-b18f-1ca06af6a91f-operator-scripts\") pod \"cinder-2dda-account-create-update-q4kct\" (UID: \"ca52e453-b636-461d-b18f-1ca06af6a91f\") " pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.669638 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-config-data\") pod \"keystone-db-sync-sllzs\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") " pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.677029 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-combined-ca-bundle\") pod \"keystone-db-sync-sllzs\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") " pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.686500 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.691084 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5k8n\" (UniqueName: \"kubernetes.io/projected/ee305aa0-15fe-46a9-b62f-8936153daddf-kube-api-access-t5k8n\") pod \"keystone-db-sync-sllzs\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") " pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.702172 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq286\" (UniqueName: \"kubernetes.io/projected/ca52e453-b636-461d-b18f-1ca06af6a91f-kube-api-access-wq286\") pod \"cinder-2dda-account-create-update-q4kct\" (UID: \"ca52e453-b636-461d-b18f-1ca06af6a91f\") " pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.705721 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.733158 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0818-account-create-update-sjz8w"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.734554 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.737338 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.744803 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0818-account-create-update-sjz8w"] Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.756989 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sllzs" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.766209 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92dd5605-46b2-44d0-b2e8-30492c2049ea-operator-scripts\") pod \"neutron-db-create-5sjsx\" (UID: \"92dd5605-46b2-44d0-b2e8-30492c2049ea\") " pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.766600 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbp8\" (UniqueName: \"kubernetes.io/projected/92dd5605-46b2-44d0-b2e8-30492c2049ea-kube-api-access-crbp8\") pod \"neutron-db-create-5sjsx\" (UID: \"92dd5605-46b2-44d0-b2e8-30492c2049ea\") " pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.767005 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92dd5605-46b2-44d0-b2e8-30492c2049ea-operator-scripts\") pod \"neutron-db-create-5sjsx\" (UID: \"92dd5605-46b2-44d0-b2e8-30492c2049ea\") " pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.783382 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbp8\" (UniqueName: \"kubernetes.io/projected/92dd5605-46b2-44d0-b2e8-30492c2049ea-kube-api-access-crbp8\") pod \"neutron-db-create-5sjsx\" (UID: \"92dd5605-46b2-44d0-b2e8-30492c2049ea\") " pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.784431 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.869499 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a15438a-18b1-4bca-9566-30c48526de56-operator-scripts\") pod \"neutron-0818-account-create-update-sjz8w\" (UID: \"1a15438a-18b1-4bca-9566-30c48526de56\") " pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.870362 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lplhl\" (UniqueName: \"kubernetes.io/projected/1a15438a-18b1-4bca-9566-30c48526de56-kube-api-access-lplhl\") pod \"neutron-0818-account-create-update-sjz8w\" (UID: \"1a15438a-18b1-4bca-9566-30c48526de56\") " pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.927777 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.973872 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a15438a-18b1-4bca-9566-30c48526de56-operator-scripts\") pod \"neutron-0818-account-create-update-sjz8w\" (UID: \"1a15438a-18b1-4bca-9566-30c48526de56\") " pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.974113 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lplhl\" (UniqueName: \"kubernetes.io/projected/1a15438a-18b1-4bca-9566-30c48526de56-kube-api-access-lplhl\") pod \"neutron-0818-account-create-update-sjz8w\" (UID: \"1a15438a-18b1-4bca-9566-30c48526de56\") " pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.975658 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a15438a-18b1-4bca-9566-30c48526de56-operator-scripts\") pod \"neutron-0818-account-create-update-sjz8w\" (UID: \"1a15438a-18b1-4bca-9566-30c48526de56\") " pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.997962 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lplhl\" (UniqueName: \"kubernetes.io/projected/1a15438a-18b1-4bca-9566-30c48526de56-kube-api-access-lplhl\") pod \"neutron-0818-account-create-update-sjz8w\" (UID: \"1a15438a-18b1-4bca-9566-30c48526de56\") " pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:27 crc kubenswrapper[4966]: I0127 16:04:27.998344 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5fdfx"] Jan 27 16:04:28 crc kubenswrapper[4966]: I0127 16:04:28.057265 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:28 crc kubenswrapper[4966]: I0127 16:04:28.687071 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5fdfx" event={"ID":"bca13755-0032-4941-bfcd-7550197712c7","Type":"ContainerStarted","Data":"72fc730eef0ab3bcc9ce1f57e3074a7c606b9e79fad792ba38e139e3a4fb4564"} Jan 27 16:04:28 crc kubenswrapper[4966]: I0127 16:04:28.963596 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nd6n6"] Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.090646 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-9d6a-account-create-update-4cs7w"] Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.130008 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9524-account-create-update-nwkpp"] Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.501801 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tzbwt"] Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.514488 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sllzs"] Jan 27 16:04:29 crc kubenswrapper[4966]: W0127 16:04:29.527381 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83f2360e_3cb8_4bda_b2f6_6d1fd30ea58a.slice/crio-09e9be6aa4b569672f2bd6827e69389e8a51fa78c9fb363a829dbfd286a9846e WatchSource:0}: Error finding container 09e9be6aa4b569672f2bd6827e69389e8a51fa78c9fb363a829dbfd286a9846e: Status 404 returned error can't find the container with id 09e9be6aa4b569672f2bd6827e69389e8a51fa78c9fb363a829dbfd286a9846e Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.541993 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5sjsx"] Jan 27 16:04:29 crc kubenswrapper[4966]: W0127 16:04:29.546349 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee305aa0_15fe_46a9_b62f_8936153daddf.slice/crio-202058a1a6d50b6265d5d06a3e1102efb39ad6808582a1758584a8d171692b24 WatchSource:0}: Error finding container 202058a1a6d50b6265d5d06a3e1102efb39ad6808582a1758584a8d171692b24: Status 404 returned error can't find the container with id 202058a1a6d50b6265d5d06a3e1102efb39ad6808582a1758584a8d171692b24 Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.557710 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2dda-account-create-update-q4kct"] Jan 27 16:04:29 crc kubenswrapper[4966]: W0127 16:04:29.564281 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92dd5605_46b2_44d0_b2e8_30492c2049ea.slice/crio-eb54dd6ee203986c8331a9b8d68d897a3f2d38c773c5633bb0a86c35ae3c9a27 WatchSource:0}: Error finding container eb54dd6ee203986c8331a9b8d68d897a3f2d38c773c5633bb0a86c35ae3c9a27: Status 404 returned error can't find the container with id eb54dd6ee203986c8331a9b8d68d897a3f2d38c773c5633bb0a86c35ae3c9a27 Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.626394 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0818-account-create-update-sjz8w"] Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.718019 4966 generic.go:334] "Generic (PLEG): container finished" podID="bca13755-0032-4941-bfcd-7550197712c7" 
containerID="ea031b24f60ca90180d7a09becc98dc20573b0c65688867bbcfe51ee475e50d1" exitCode=0 Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.718101 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5fdfx" event={"ID":"bca13755-0032-4941-bfcd-7550197712c7","Type":"ContainerDied","Data":"ea031b24f60ca90180d7a09becc98dc20573b0c65688867bbcfe51ee475e50d1"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.721517 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9d6a-account-create-update-4cs7w" event={"ID":"e7608642-7d28-462c-9838-2b1aa51a69ed","Type":"ContainerStarted","Data":"46fd629d488d38bb799dfb8fdb924583cf5fa0273150b4714bf6e320c5b563ee"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.721580 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9d6a-account-create-update-4cs7w" event={"ID":"e7608642-7d28-462c-9838-2b1aa51a69ed","Type":"ContainerStarted","Data":"91bcab0e8845ce73e871ae7a7848104632f75e4a9f39b19abe3ff261a135adcb"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.727223 4966 generic.go:334] "Generic (PLEG): container finished" podID="33f50afa-bf36-4363-8076-0b8271d89a85" containerID="dde934ed23745608fcb281716783c17ade0d45f63e3f471eb90462553e4cf4bf" exitCode=0 Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.727296 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s9cgg" event={"ID":"33f50afa-bf36-4363-8076-0b8271d89a85","Type":"ContainerDied","Data":"dde934ed23745608fcb281716783c17ade0d45f63e3f471eb90462553e4cf4bf"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.737734 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5sjsx" event={"ID":"92dd5605-46b2-44d0-b2e8-30492c2049ea","Type":"ContainerStarted","Data":"eb54dd6ee203986c8331a9b8d68d897a3f2d38c773c5633bb0a86c35ae3c9a27"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.776540 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-9d6a-account-create-update-4cs7w" podStartSLOduration=2.776522517 podStartE2EDuration="2.776522517s" podCreationTimestamp="2026-01-27 16:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:04:29.750541551 +0000 UTC m=+1336.053335039" watchObservedRunningTime="2026-01-27 16:04:29.776522517 +0000 UTC m=+1336.079316005" Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.783026 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"16797316907bf98b6f36f71f077823b9eb243630074528cf3c985a0489f35a0d"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.783098 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"7e68de45f121e2ba2d89d95405137f59f44760d67d565e89f8c39d0d0c3ed556"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.783111 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"97b374dccbb890960dcb6d229b243e4ee5ce90713650fb9d09c339cc4f29cb00"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.800450 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-9524-account-create-update-nwkpp" event={"ID":"4fe63661-1f35-4598-ab4e-86f934127864","Type":"ContainerStarted","Data":"85e4314d666f44850abd9172f03c796f4c80d82f3df6ed3bdc9714be565b6ea7"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.800614 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9524-account-create-update-nwkpp" event={"ID":"4fe63661-1f35-4598-ab4e-86f934127864","Type":"ContainerStarted","Data":"f66996f5f279045e597067275cee742178c30e1d6e3a834ed36373732332de51"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.804792 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tzbwt" event={"ID":"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a","Type":"ContainerStarted","Data":"09e9be6aa4b569672f2bd6827e69389e8a51fa78c9fb363a829dbfd286a9846e"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.809993 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sllzs" event={"ID":"ee305aa0-15fe-46a9-b62f-8936153daddf","Type":"ContainerStarted","Data":"202058a1a6d50b6265d5d06a3e1102efb39ad6808582a1758584a8d171692b24"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.822724 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-9524-account-create-update-nwkpp" podStartSLOduration=2.822700725 podStartE2EDuration="2.822700725s" podCreationTimestamp="2026-01-27 16:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:04:29.813459405 +0000 UTC m=+1336.116252893" watchObservedRunningTime="2026-01-27 16:04:29.822700725 +0000 UTC m=+1336.125494213" Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.825792 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nd6n6" event={"ID":"e84c4662-7140-4e74-beff-dc84f3c0b6c7","Type":"ContainerStarted","Data":"22b4cae50654a749e7d911158110b93d6dca19b876d96a32a403bfd2ff90bc22"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.826006 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nd6n6" event={"ID":"e84c4662-7140-4e74-beff-dc84f3c0b6c7","Type":"ContainerStarted","Data":"3811598636d40dfc758abee985a864740ffa8df1117db6c42d627dd644a17c2e"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.830482 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0818-account-create-update-sjz8w" event={"ID":"1a15438a-18b1-4bca-9566-30c48526de56","Type":"ContainerStarted","Data":"a574b9bd24d6a9a95a441f89a72322bc7ae22fd8df2d46a896f7c2c5a3ba8a8a"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.836602 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2dda-account-create-update-q4kct" event={"ID":"ca52e453-b636-461d-b18f-1ca06af6a91f","Type":"ContainerStarted","Data":"8cf0f1b0a4c7e03499b15726f1bea2bbd3997087f9a3f4faaa6c84173971bcb9"} Jan 27 16:04:29 crc kubenswrapper[4966]: I0127 16:04:29.848653 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-nd6n6" podStartSLOduration=2.848634109 podStartE2EDuration="2.848634109s" podCreationTimestamp="2026-01-27 16:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:04:29.843003492 +0000 UTC m=+1336.145796970" watchObservedRunningTime="2026-01-27 16:04:29.848634109 +0000 
UTC m=+1336.151427597" Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.179046 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.179097 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.182572 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.857289 4966 generic.go:334] "Generic (PLEG): container finished" podID="1a15438a-18b1-4bca-9566-30c48526de56" containerID="d188a035bb6a62a4b62d1aa990a4f127b5d62b61a888cd49612e5ca36aeaae04" exitCode=0 Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.857357 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0818-account-create-update-sjz8w" event={"ID":"1a15438a-18b1-4bca-9566-30c48526de56","Type":"ContainerDied","Data":"d188a035bb6a62a4b62d1aa990a4f127b5d62b61a888cd49612e5ca36aeaae04"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.870133 4966 generic.go:334] "Generic (PLEG): container finished" podID="ca52e453-b636-461d-b18f-1ca06af6a91f" containerID="745ebd32475c1fc547532352669d70e75978f8ecbce7e457479630bbc6753e98" exitCode=0 Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.870256 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2dda-account-create-update-q4kct" event={"ID":"ca52e453-b636-461d-b18f-1ca06af6a91f","Type":"ContainerDied","Data":"745ebd32475c1fc547532352669d70e75978f8ecbce7e457479630bbc6753e98"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.892297 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"f5f696a4d6c2b2c63b971066b4ea8854a21cedefa78edb81f9676c44bace5843"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.892351 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"c0c6a45f3356680c5f5d87562a1f56607eb20fd566257e652875ac06d55dc343"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.892362 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"23a6db49bdfe7934087b6a2d877ce61587351b45d9cb6734000dd8a59d32d4d5"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.898077 4966 generic.go:334] "Generic (PLEG): container finished" podID="4fe63661-1f35-4598-ab4e-86f934127864" containerID="85e4314d666f44850abd9172f03c796f4c80d82f3df6ed3bdc9714be565b6ea7" exitCode=0 Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.898243 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9524-account-create-update-nwkpp" event={"ID":"4fe63661-1f35-4598-ab4e-86f934127864","Type":"ContainerDied","Data":"85e4314d666f44850abd9172f03c796f4c80d82f3df6ed3bdc9714be565b6ea7"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.905441 4966 generic.go:334] "Generic (PLEG): container finished" podID="83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a" containerID="fa888a72c5557dcf71a41ffb4493854d81e44297a48ca215cf16b236f8d766bb" exitCode=0 Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.905536 4966 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tzbwt" event={"ID":"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a","Type":"ContainerDied","Data":"fa888a72c5557dcf71a41ffb4493854d81e44297a48ca215cf16b236f8d766bb"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.910950 4966 generic.go:334] "Generic (PLEG): container finished" podID="e7608642-7d28-462c-9838-2b1aa51a69ed" containerID="46fd629d488d38bb799dfb8fdb924583cf5fa0273150b4714bf6e320c5b563ee" exitCode=0 Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.911040 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9d6a-account-create-update-4cs7w" event={"ID":"e7608642-7d28-462c-9838-2b1aa51a69ed","Type":"ContainerDied","Data":"46fd629d488d38bb799dfb8fdb924583cf5fa0273150b4714bf6e320c5b563ee"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.918448 4966 generic.go:334] "Generic (PLEG): container finished" podID="e84c4662-7140-4e74-beff-dc84f3c0b6c7" containerID="22b4cae50654a749e7d911158110b93d6dca19b876d96a32a403bfd2ff90bc22" exitCode=0 Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.918533 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nd6n6" event={"ID":"e84c4662-7140-4e74-beff-dc84f3c0b6c7","Type":"ContainerDied","Data":"22b4cae50654a749e7d911158110b93d6dca19b876d96a32a403bfd2ff90bc22"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.920448 4966 generic.go:334] "Generic (PLEG): container finished" podID="92dd5605-46b2-44d0-b2e8-30492c2049ea" containerID="a031bf4240b8bc65ada94c839fa8f64f8e1ef091d965088373fa5b826154577a" exitCode=0 Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.922199 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5sjsx" event={"ID":"92dd5605-46b2-44d0-b2e8-30492c2049ea","Type":"ContainerDied","Data":"a031bf4240b8bc65ada94c839fa8f64f8e1ef091d965088373fa5b826154577a"} Jan 27 16:04:30 crc kubenswrapper[4966]: I0127 16:04:30.924815 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.462689 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.572131 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v54vc\" (UniqueName: \"kubernetes.io/projected/bca13755-0032-4941-bfcd-7550197712c7-kube-api-access-v54vc\") pod \"bca13755-0032-4941-bfcd-7550197712c7\" (UID: \"bca13755-0032-4941-bfcd-7550197712c7\") " Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.572350 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca13755-0032-4941-bfcd-7550197712c7-operator-scripts\") pod \"bca13755-0032-4941-bfcd-7550197712c7\" (UID: \"bca13755-0032-4941-bfcd-7550197712c7\") " Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.573587 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca13755-0032-4941-bfcd-7550197712c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bca13755-0032-4941-bfcd-7550197712c7" (UID: "bca13755-0032-4941-bfcd-7550197712c7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.578998 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca13755-0032-4941-bfcd-7550197712c7-kube-api-access-v54vc" (OuterVolumeSpecName: "kube-api-access-v54vc") pod "bca13755-0032-4941-bfcd-7550197712c7" (UID: "bca13755-0032-4941-bfcd-7550197712c7"). InnerVolumeSpecName "kube-api-access-v54vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.606750 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-s9cgg" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.673686 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt8p7\" (UniqueName: \"kubernetes.io/projected/33f50afa-bf36-4363-8076-0b8271d89a85-kube-api-access-dt8p7\") pod \"33f50afa-bf36-4363-8076-0b8271d89a85\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.673872 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-db-sync-config-data\") pod \"33f50afa-bf36-4363-8076-0b8271d89a85\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.673947 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-config-data\") pod \"33f50afa-bf36-4363-8076-0b8271d89a85\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.673993 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-combined-ca-bundle\") pod \"33f50afa-bf36-4363-8076-0b8271d89a85\" (UID: \"33f50afa-bf36-4363-8076-0b8271d89a85\") " Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.675688 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca13755-0032-4941-bfcd-7550197712c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.675713 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v54vc\" (UniqueName: \"kubernetes.io/projected/bca13755-0032-4941-bfcd-7550197712c7-kube-api-access-v54vc\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.678460 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f50afa-bf36-4363-8076-0b8271d89a85-kube-api-access-dt8p7" (OuterVolumeSpecName: "kube-api-access-dt8p7") pod "33f50afa-bf36-4363-8076-0b8271d89a85" (UID: "33f50afa-bf36-4363-8076-0b8271d89a85"). InnerVolumeSpecName "kube-api-access-dt8p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.683978 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "33f50afa-bf36-4363-8076-0b8271d89a85" (UID: "33f50afa-bf36-4363-8076-0b8271d89a85"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.707516 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33f50afa-bf36-4363-8076-0b8271d89a85" (UID: "33f50afa-bf36-4363-8076-0b8271d89a85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.741242 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-config-data" (OuterVolumeSpecName: "config-data") pod "33f50afa-bf36-4363-8076-0b8271d89a85" (UID: "33f50afa-bf36-4363-8076-0b8271d89a85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.777396 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt8p7\" (UniqueName: \"kubernetes.io/projected/33f50afa-bf36-4363-8076-0b8271d89a85-kube-api-access-dt8p7\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.777738 4966 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.777753 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.777765 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f50afa-bf36-4363-8076-0b8271d89a85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.931212 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s9cgg" event={"ID":"33f50afa-bf36-4363-8076-0b8271d89a85","Type":"ContainerDied","Data":"28c6aa672a48c107be087cc8a18492ad9315504d7fb332d776fc6c6ad90595d7"} Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.931243 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-s9cgg" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.931273 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c6aa672a48c107be087cc8a18492ad9315504d7fb332d776fc6c6ad90595d7" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.933749 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5fdfx" event={"ID":"bca13755-0032-4941-bfcd-7550197712c7","Type":"ContainerDied","Data":"72fc730eef0ab3bcc9ce1f57e3074a7c606b9e79fad792ba38e139e3a4fb4564"} Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.933821 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72fc730eef0ab3bcc9ce1f57e3074a7c606b9e79fad792ba38e139e3a4fb4564" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.933773 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-5fdfx" Jan 27 16:04:31 crc kubenswrapper[4966]: I0127 16:04:31.944155 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a59c903b-6e40-43bd-a120-e47e504cf5a9","Type":"ContainerStarted","Data":"b4297ce89fcb0003f3b717eb125c14cd3967e17b69fa060dce84301ed3ebbfd9"} Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.375259 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.03654615 podStartE2EDuration="53.375238945s" podCreationTimestamp="2026-01-27 16:03:39 +0000 UTC" firstStartedPulling="2026-01-27 16:04:18.012423703 +0000 UTC m=+1324.315217181" lastFinishedPulling="2026-01-27 16:04:28.351116478 +0000 UTC m=+1334.653909976" observedRunningTime="2026-01-27 16:04:32.014491478 +0000 UTC m=+1338.317284976" watchObservedRunningTime="2026-01-27 16:04:32.375238945 +0000 UTC m=+1338.678032433" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.384145 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fl9jh"] Jan 27 16:04:32 crc kubenswrapper[4966]: E0127 16:04:32.385133 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f50afa-bf36-4363-8076-0b8271d89a85" containerName="glance-db-sync" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.385280 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f50afa-bf36-4363-8076-0b8271d89a85" containerName="glance-db-sync" Jan 27 16:04:32 crc kubenswrapper[4966]: E0127 16:04:32.385409 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca13755-0032-4941-bfcd-7550197712c7" containerName="mariadb-database-create" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.387266 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca13755-0032-4941-bfcd-7550197712c7" containerName="mariadb-database-create" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.387720 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f50afa-bf36-4363-8076-0b8271d89a85" containerName="glance-db-sync" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.387790 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca13755-0032-4941-bfcd-7550197712c7" containerName="mariadb-database-create" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.389374 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.455847 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fl9jh"] Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.519445 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm2lj\" (UniqueName: \"kubernetes.io/projected/c3f57b1d-9728-43ba-bb02-98bc460fdebb-kube-api-access-tm2lj\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.519513 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.519530 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.519574 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-config\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.519620 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-dns-svc\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.625275 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-config\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.625378 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-dns-svc\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.625540 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm2lj\" (UniqueName: \"kubernetes.io/projected/c3f57b1d-9728-43ba-bb02-98bc460fdebb-kube-api-access-tm2lj\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.625594 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.625611 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.626559 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.627082 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-config\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.628333 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-dns-svc\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.629704 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.645741 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.695516 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm2lj\" (UniqueName: \"kubernetes.io/projected/c3f57b1d-9728-43ba-bb02-98bc460fdebb-kube-api-access-tm2lj\") pod \"dnsmasq-dns-74dc88fc-fl9jh\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") " pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.731944 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.827282 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fl9jh"] Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.837729 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7p2m\" (UniqueName: \"kubernetes.io/projected/4fe63661-1f35-4598-ab4e-86f934127864-kube-api-access-k7p2m\") pod \"4fe63661-1f35-4598-ab4e-86f934127864\" (UID: \"4fe63661-1f35-4598-ab4e-86f934127864\") " Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.837908 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe63661-1f35-4598-ab4e-86f934127864-operator-scripts\") pod \"4fe63661-1f35-4598-ab4e-86f934127864\" (UID: \"4fe63661-1f35-4598-ab4e-86f934127864\") " Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.839397 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fe63661-1f35-4598-ab4e-86f934127864-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fe63661-1f35-4598-ab4e-86f934127864" (UID: "4fe63661-1f35-4598-ab4e-86f934127864"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.857126 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe63661-1f35-4598-ab4e-86f934127864-kube-api-access-k7p2m" (OuterVolumeSpecName: "kube-api-access-k7p2m") pod "4fe63661-1f35-4598-ab4e-86f934127864" (UID: "4fe63661-1f35-4598-ab4e-86f934127864"). InnerVolumeSpecName "kube-api-access-k7p2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.914029 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-nxn4w"] Jan 27 16:04:32 crc kubenswrapper[4966]: E0127 16:04:32.914783 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe63661-1f35-4598-ab4e-86f934127864" containerName="mariadb-account-create-update" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.914796 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe63661-1f35-4598-ab4e-86f934127864" containerName="mariadb-account-create-update" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.915075 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe63661-1f35-4598-ab4e-86f934127864" containerName="mariadb-account-create-update" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.916579 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.923118 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.940835 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.940884 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.940984 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.941056 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-nxn4w"] Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.941067 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-config\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.941094 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhqt\" (UniqueName: \"kubernetes.io/projected/008f2af8-50bd-4efc-ab07-5dfbf858adbf-kube-api-access-nwhqt\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.941125 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.941179 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7p2m\" (UniqueName: \"kubernetes.io/projected/4fe63661-1f35-4598-ab4e-86f934127864-kube-api-access-k7p2m\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.941191 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe63661-1f35-4598-ab4e-86f934127864-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:32 crc kubenswrapper[4966]: I0127 16:04:32.992003 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.037235 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9524-account-create-update-nwkpp" event={"ID":"4fe63661-1f35-4598-ab4e-86f934127864","Type":"ContainerDied","Data":"f66996f5f279045e597067275cee742178c30e1d6e3a834ed36373732332de51"} Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.037286 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f66996f5f279045e597067275cee742178c30e1d6e3a834ed36373732332de51" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.037388 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9524-account-create-update-nwkpp" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.078844 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.079652 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-config\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.079714 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwhqt\" (UniqueName: \"kubernetes.io/projected/008f2af8-50bd-4efc-ab07-5dfbf858adbf-kube-api-access-nwhqt\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.079782 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.080956 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-config\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.081547 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.083497 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 
16:04:33.097304 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.114487 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.114639 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.116109 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2dda-account-create-update-q4kct" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.116485 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2dda-account-create-update-q4kct" event={"ID":"ca52e453-b636-461d-b18f-1ca06af6a91f","Type":"ContainerDied","Data":"8cf0f1b0a4c7e03499b15726f1bea2bbd3997087f9a3f4faaa6c84173971bcb9"} Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.116523 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cf0f1b0a4c7e03499b15726f1bea2bbd3997087f9a3f4faaa6c84173971bcb9" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.116627 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.120942 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwhqt\" (UniqueName: \"kubernetes.io/projected/008f2af8-50bd-4efc-ab07-5dfbf858adbf-kube-api-access-nwhqt\") pod \"dnsmasq-dns-5f59b8f679-nxn4w\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.131068 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.217022 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca52e453-b636-461d-b18f-1ca06af6a91f-operator-scripts\") pod \"ca52e453-b636-461d-b18f-1ca06af6a91f\" (UID: \"ca52e453-b636-461d-b18f-1ca06af6a91f\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.217843 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq286\" (UniqueName: \"kubernetes.io/projected/ca52e453-b636-461d-b18f-1ca06af6a91f-kube-api-access-wq286\") pod \"ca52e453-b636-461d-b18f-1ca06af6a91f\" (UID: \"ca52e453-b636-461d-b18f-1ca06af6a91f\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.218004 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca52e453-b636-461d-b18f-1ca06af6a91f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca52e453-b636-461d-b18f-1ca06af6a91f" (UID: "ca52e453-b636-461d-b18f-1ca06af6a91f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.218886 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca52e453-b636-461d-b18f-1ca06af6a91f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.236845 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca52e453-b636-461d-b18f-1ca06af6a91f-kube-api-access-wq286" (OuterVolumeSpecName: "kube-api-access-wq286") pod "ca52e453-b636-461d-b18f-1ca06af6a91f" (UID: "ca52e453-b636-461d-b18f-1ca06af6a91f"). InnerVolumeSpecName "kube-api-access-wq286". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.299724 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.324037 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84c4662-7140-4e74-beff-dc84f3c0b6c7-operator-scripts\") pod \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\" (UID: \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.324292 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxg8h\" (UniqueName: \"kubernetes.io/projected/e84c4662-7140-4e74-beff-dc84f3c0b6c7-kube-api-access-mxg8h\") pod \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\" (UID: \"e84c4662-7140-4e74-beff-dc84f3c0b6c7\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.325027 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq286\" (UniqueName: \"kubernetes.io/projected/ca52e453-b636-461d-b18f-1ca06af6a91f-kube-api-access-wq286\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.330947 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e84c4662-7140-4e74-beff-dc84f3c0b6c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e84c4662-7140-4e74-beff-dc84f3c0b6c7" (UID: "e84c4662-7140-4e74-beff-dc84f3c0b6c7"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.345445 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84c4662-7140-4e74-beff-dc84f3c0b6c7-kube-api-access-mxg8h" (OuterVolumeSpecName: "kube-api-access-mxg8h") pod "e84c4662-7140-4e74-beff-dc84f3c0b6c7" (UID: "e84c4662-7140-4e74-beff-dc84f3c0b6c7"). InnerVolumeSpecName "kube-api-access-mxg8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.428970 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84c4662-7140-4e74-beff-dc84f3c0b6c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.429017 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxg8h\" (UniqueName: \"kubernetes.io/projected/e84c4662-7140-4e74-beff-dc84f3c0b6c7-kube-api-access-mxg8h\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.498302 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.511401 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.532447 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.536024 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.632205 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7608642-7d28-462c-9838-2b1aa51a69ed-operator-scripts\") pod \"e7608642-7d28-462c-9838-2b1aa51a69ed\" (UID: \"e7608642-7d28-462c-9838-2b1aa51a69ed\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.632276 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crbp8\" (UniqueName: \"kubernetes.io/projected/92dd5605-46b2-44d0-b2e8-30492c2049ea-kube-api-access-crbp8\") pod \"92dd5605-46b2-44d0-b2e8-30492c2049ea\" (UID: \"92dd5605-46b2-44d0-b2e8-30492c2049ea\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.632358 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lplhl\" (UniqueName: \"kubernetes.io/projected/1a15438a-18b1-4bca-9566-30c48526de56-kube-api-access-lplhl\") pod \"1a15438a-18b1-4bca-9566-30c48526de56\" (UID: \"1a15438a-18b1-4bca-9566-30c48526de56\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.632390 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zt8w\" (UniqueName: \"kubernetes.io/projected/e7608642-7d28-462c-9838-2b1aa51a69ed-kube-api-access-6zt8w\") pod \"e7608642-7d28-462c-9838-2b1aa51a69ed\" (UID: \"e7608642-7d28-462c-9838-2b1aa51a69ed\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.632434 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92dd5605-46b2-44d0-b2e8-30492c2049ea-operator-scripts\") pod \"92dd5605-46b2-44d0-b2e8-30492c2049ea\" (UID: \"92dd5605-46b2-44d0-b2e8-30492c2049ea\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.632507 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcm4k\" (UniqueName: \"kubernetes.io/projected/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-kube-api-access-zcm4k\") pod \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\" (UID: \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.632542 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-operator-scripts\") pod \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\" (UID: \"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.632592 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a15438a-18b1-4bca-9566-30c48526de56-operator-scripts\") pod \"1a15438a-18b1-4bca-9566-30c48526de56\" (UID: \"1a15438a-18b1-4bca-9566-30c48526de56\") " Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.633858 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dd5605-46b2-44d0-b2e8-30492c2049ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92dd5605-46b2-44d0-b2e8-30492c2049ea" (UID: "92dd5605-46b2-44d0-b2e8-30492c2049ea"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.634292 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a" (UID: "83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.634779 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a15438a-18b1-4bca-9566-30c48526de56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a15438a-18b1-4bca-9566-30c48526de56" (UID: "1a15438a-18b1-4bca-9566-30c48526de56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.635590 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7608642-7d28-462c-9838-2b1aa51a69ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7608642-7d28-462c-9838-2b1aa51a69ed" (UID: "e7608642-7d28-462c-9838-2b1aa51a69ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.644455 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a15438a-18b1-4bca-9566-30c48526de56-kube-api-access-lplhl" (OuterVolumeSpecName: "kube-api-access-lplhl") pod "1a15438a-18b1-4bca-9566-30c48526de56" (UID: "1a15438a-18b1-4bca-9566-30c48526de56"). InnerVolumeSpecName "kube-api-access-lplhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.653394 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fl9jh"] Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.661293 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dd5605-46b2-44d0-b2e8-30492c2049ea-kube-api-access-crbp8" (OuterVolumeSpecName: "kube-api-access-crbp8") pod "92dd5605-46b2-44d0-b2e8-30492c2049ea" (UID: "92dd5605-46b2-44d0-b2e8-30492c2049ea"). InnerVolumeSpecName "kube-api-access-crbp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.661409 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7608642-7d28-462c-9838-2b1aa51a69ed-kube-api-access-6zt8w" (OuterVolumeSpecName: "kube-api-access-6zt8w") pod "e7608642-7d28-462c-9838-2b1aa51a69ed" (UID: "e7608642-7d28-462c-9838-2b1aa51a69ed"). InnerVolumeSpecName "kube-api-access-6zt8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.667425 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-kube-api-access-zcm4k" (OuterVolumeSpecName: "kube-api-access-zcm4k") pod "83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a" (UID: "83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a"). InnerVolumeSpecName "kube-api-access-zcm4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.734694 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcm4k\" (UniqueName: \"kubernetes.io/projected/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-kube-api-access-zcm4k\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.734727 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.734736 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a15438a-18b1-4bca-9566-30c48526de56-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.734744 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7608642-7d28-462c-9838-2b1aa51a69ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.734754 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crbp8\" (UniqueName: \"kubernetes.io/projected/92dd5605-46b2-44d0-b2e8-30492c2049ea-kube-api-access-crbp8\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.734762 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lplhl\" (UniqueName: \"kubernetes.io/projected/1a15438a-18b1-4bca-9566-30c48526de56-kube-api-access-lplhl\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.734770 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zt8w\" (UniqueName: \"kubernetes.io/projected/e7608642-7d28-462c-9838-2b1aa51a69ed-kube-api-access-6zt8w\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.734778 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92dd5605-46b2-44d0-b2e8-30492c2049ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:33 crc kubenswrapper[4966]: I0127 16:04:33.939583 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-nxn4w"] Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.118799 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-9d6a-account-create-update-4cs7w" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.118807 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9d6a-account-create-update-4cs7w" event={"ID":"e7608642-7d28-462c-9838-2b1aa51a69ed","Type":"ContainerDied","Data":"91bcab0e8845ce73e871ae7a7848104632f75e4a9f39b19abe3ff261a135adcb"} Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.118860 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91bcab0e8845ce73e871ae7a7848104632f75e4a9f39b19abe3ff261a135adcb" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.130818 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nd6n6" event={"ID":"e84c4662-7140-4e74-beff-dc84f3c0b6c7","Type":"ContainerDied","Data":"3811598636d40dfc758abee985a864740ffa8df1117db6c42d627dd644a17c2e"} Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.130862 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3811598636d40dfc758abee985a864740ffa8df1117db6c42d627dd644a17c2e" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.130932 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nd6n6" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.134759 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5sjsx" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.135324 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5sjsx" event={"ID":"92dd5605-46b2-44d0-b2e8-30492c2049ea","Type":"ContainerDied","Data":"eb54dd6ee203986c8331a9b8d68d897a3f2d38c773c5633bb0a86c35ae3c9a27"} Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.135361 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb54dd6ee203986c8331a9b8d68d897a3f2d38c773c5633bb0a86c35ae3c9a27" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.143782 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0818-account-create-update-sjz8w" event={"ID":"1a15438a-18b1-4bca-9566-30c48526de56","Type":"ContainerDied","Data":"a574b9bd24d6a9a95a441f89a72322bc7ae22fd8df2d46a896f7c2c5a3ba8a8a"} Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.143824 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a574b9bd24d6a9a95a441f89a72322bc7ae22fd8df2d46a896f7c2c5a3ba8a8a" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.143882 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0818-account-create-update-sjz8w" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.148326 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tzbwt" event={"ID":"83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a","Type":"ContainerDied","Data":"09e9be6aa4b569672f2bd6827e69389e8a51fa78c9fb363a829dbfd286a9846e"} Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.148379 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e9be6aa4b569672f2bd6827e69389e8a51fa78c9fb363a829dbfd286a9846e" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.148446 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tzbwt" Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.207679 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.207942 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="prometheus" containerID="cri-o://cf08ac47789ffbc8ac65ce0a6803363969539d88327444d57c1b629ca8a356de" gracePeriod=600 Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.208342 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="thanos-sidecar" containerID="cri-o://1ff889379f525d5cb134a5e03332a2a08366031fb54bf2aad960f0645badaf20" gracePeriod=600 Jan 27 16:04:34 crc kubenswrapper[4966]: I0127 16:04:34.208388 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="config-reloader" containerID="cri-o://e0c9009d1c35891a4e6f54e0dffa1b4d9023a4a09b6706b57a973b768e038f6d" gracePeriod=600 Jan 27 16:04:35 crc kubenswrapper[4966]: I0127 16:04:35.179151 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.151:9090/-/ready\": dial tcp 10.217.0.151:9090: connect: connection refused" Jan 27 16:04:35 crc kubenswrapper[4966]: I0127 16:04:35.186041 4966 generic.go:334] "Generic (PLEG): container finished" podID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerID="1ff889379f525d5cb134a5e03332a2a08366031fb54bf2aad960f0645badaf20" exitCode=0 Jan 27 16:04:35 crc kubenswrapper[4966]: I0127 16:04:35.186077 4966 generic.go:334] "Generic (PLEG): container finished" podID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerID="e0c9009d1c35891a4e6f54e0dffa1b4d9023a4a09b6706b57a973b768e038f6d" exitCode=0 Jan 27 16:04:35 crc kubenswrapper[4966]: I0127 16:04:35.186085 4966 generic.go:334] "Generic (PLEG): container finished" podID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerID="cf08ac47789ffbc8ac65ce0a6803363969539d88327444d57c1b629ca8a356de" exitCode=0 Jan 27 16:04:35 crc kubenswrapper[4966]: I0127 16:04:35.186106 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerDied","Data":"1ff889379f525d5cb134a5e03332a2a08366031fb54bf2aad960f0645badaf20"} Jan 27 16:04:35 crc kubenswrapper[4966]: I0127 16:04:35.186134 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerDied","Data":"e0c9009d1c35891a4e6f54e0dffa1b4d9023a4a09b6706b57a973b768e038f6d"} Jan 27 16:04:35 crc kubenswrapper[4966]: I0127 16:04:35.186146 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerDied","Data":"cf08ac47789ffbc8ac65ce0a6803363969539d88327444d57c1b629ca8a356de"} Jan 27 16:04:37 crc kubenswrapper[4966]: W0127 16:04:37.185041 4966 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod008f2af8_50bd_4efc_ab07_5dfbf858adbf.slice/crio-8a976ab25cbbebab1676c8d06c090c27573032d8b740218e476ba7fdc4194eeb WatchSource:0}: Error finding container 8a976ab25cbbebab1676c8d06c090c27573032d8b740218e476ba7fdc4194eeb: Status 404 returned error can't find the container with id 8a976ab25cbbebab1676c8d06c090c27573032d8b740218e476ba7fdc4194eeb Jan 27 16:04:37 crc kubenswrapper[4966]: I0127 16:04:37.225223 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" event={"ID":"008f2af8-50bd-4efc-ab07-5dfbf858adbf","Type":"ContainerStarted","Data":"8a976ab25cbbebab1676c8d06c090c27573032d8b740218e476ba7fdc4194eeb"} Jan 27 16:04:37 crc kubenswrapper[4966]: I0127 16:04:37.228513 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" event={"ID":"c3f57b1d-9728-43ba-bb02-98bc460fdebb","Type":"ContainerStarted","Data":"46db96bae505ae2ceb97f33e71816e8e7356c35b79994787c591f638240b0641"} Jan 27 16:04:37 crc kubenswrapper[4966]: I0127 16:04:37.992484 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.130609 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-thanos-prometheus-http-client-file\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.130665 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-web-config\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.130764 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config-out\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.130822 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-tls-assets\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.130839 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.130884 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-1\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.130947 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-0\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.131242 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.131290 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-559b4\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-kube-api-access-559b4\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.131329 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-2\") pod \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\" (UID: \"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651\") " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.133290 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.134337 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.134973 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.150993 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.151074 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config-out" (OuterVolumeSpecName: "config-out") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.154161 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.156256 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-kube-api-access-559b4" (OuterVolumeSpecName: "kube-api-access-559b4") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "kube-api-access-559b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.165586 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config" (OuterVolumeSpecName: "config") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.176673 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "pvc-09717264-e586-40d9-8eb7-6ef2244b94f6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.201324 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-web-config" (OuterVolumeSpecName: "web-config") pod "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" (UID: "fb97b579-dcb8-44d5-8ea1-1e56c6b0d651"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234523 4966 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234575 4966 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234590 4966 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234600 4966 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234610 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234621 4966 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234637 4966 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234673 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") on node \"crc\" " Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234687 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-559b4\" (UniqueName: \"kubernetes.io/projected/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-kube-api-access-559b4\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.234698 4966 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.255314 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.255321 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fb97b579-dcb8-44d5-8ea1-1e56c6b0d651","Type":"ContainerDied","Data":"c1906b55fcf5c66d8749bf1780b217583e86e1db016a907cb6e76987393c7f97"} Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.255405 4966 scope.go:117] "RemoveContainer" containerID="1ff889379f525d5cb134a5e03332a2a08366031fb54bf2aad960f0645badaf20" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.272373 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sllzs" event={"ID":"ee305aa0-15fe-46a9-b62f-8936153daddf","Type":"ContainerStarted","Data":"ceb8672ab8232d5c6af3237ca37794f668ee6284d02d458ebaf61ccce70b1033"} Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.291320 4966 generic.go:334] "Generic (PLEG): container finished" podID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerID="ab1cd6427956ddc79ebd492971f4d59b36923d54310383625b2019c8b8887456" exitCode=0 Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.291637 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" event={"ID":"008f2af8-50bd-4efc-ab07-5dfbf858adbf","Type":"ContainerDied","Data":"ab1cd6427956ddc79ebd492971f4d59b36923d54310383625b2019c8b8887456"} Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.298332 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.298703 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-09717264-e586-40d9-8eb7-6ef2244b94f6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6") on node "crc" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.310461 4966 generic.go:334] "Generic (PLEG): container finished" podID="c3f57b1d-9728-43ba-bb02-98bc460fdebb" containerID="02083eedbdab2be535c242a4c23e544eb3960386fae5ec21175a30910499daba" exitCode=0 Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.310528 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" event={"ID":"c3f57b1d-9728-43ba-bb02-98bc460fdebb","Type":"ContainerDied","Data":"02083eedbdab2be535c242a4c23e544eb3960386fae5ec21175a30910499daba"} Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.320052 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-sllzs" podStartSLOduration=3.509549013 podStartE2EDuration="11.320026251s" podCreationTimestamp="2026-01-27 16:04:27 +0000 UTC" firstStartedPulling="2026-01-27 16:04:29.552585841 +0000 UTC m=+1335.855379329" lastFinishedPulling="2026-01-27 16:04:37.363063079 +0000 UTC m=+1343.665856567" observedRunningTime="2026-01-27 16:04:38.302944985 +0000 UTC m=+1344.605738483" watchObservedRunningTime="2026-01-27 16:04:38.320026251 +0000 UTC m=+1344.622819749" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.337380 4966 reconciler_common.go:293] "Volume detached for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.508434 4966 scope.go:117] "RemoveContainer" 
containerID="e0c9009d1c35891a4e6f54e0dffa1b4d9023a4a09b6706b57a973b768e038f6d" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.549112 4966 scope.go:117] "RemoveContainer" containerID="cf08ac47789ffbc8ac65ce0a6803363969539d88327444d57c1b629ca8a356de" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.571860 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.609968 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.626101 4966 scope.go:117] "RemoveContainer" containerID="bdba0b211926a4b0fa7405022533bc91f629b32bffc8a5e980902fcbf6cf6cfa" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.630369 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.631384 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="prometheus" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.631492 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="prometheus" Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.631581 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dd5605-46b2-44d0-b2e8-30492c2049ea" containerName="mariadb-database-create" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.631652 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dd5605-46b2-44d0-b2e8-30492c2049ea" containerName="mariadb-database-create" Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.631728 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a" containerName="mariadb-database-create" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.631798 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a" containerName="mariadb-database-create" Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.631875 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="thanos-sidecar" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.631977 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="thanos-sidecar" Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.632081 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84c4662-7140-4e74-beff-dc84f3c0b6c7" containerName="mariadb-database-create" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.632179 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84c4662-7140-4e74-beff-dc84f3c0b6c7" containerName="mariadb-database-create" Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.632255 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca52e453-b636-461d-b18f-1ca06af6a91f" containerName="mariadb-account-create-update" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.632350 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca52e453-b636-461d-b18f-1ca06af6a91f" containerName="mariadb-account-create-update" Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.632426 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7608642-7d28-462c-9838-2b1aa51a69ed" 
containerName="mariadb-account-create-update" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.632498 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7608642-7d28-462c-9838-2b1aa51a69ed" containerName="mariadb-account-create-update" Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.632566 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="init-config-reloader" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.632630 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="init-config-reloader" Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.632714 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a15438a-18b1-4bca-9566-30c48526de56" containerName="mariadb-account-create-update" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.632777 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a15438a-18b1-4bca-9566-30c48526de56" containerName="mariadb-account-create-update" Jan 27 16:04:38 crc kubenswrapper[4966]: E0127 16:04:38.632843 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="config-reloader" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.632973 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="config-reloader" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.633273 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a15438a-18b1-4bca-9566-30c48526de56" containerName="mariadb-account-create-update" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.633356 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="thanos-sidecar" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.633824 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca52e453-b636-461d-b18f-1ca06af6a91f" containerName="mariadb-account-create-update" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.633925 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="prometheus" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.634015 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84c4662-7140-4e74-beff-dc84f3c0b6c7" containerName="mariadb-database-create" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.634095 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" containerName="config-reloader" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.634168 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dd5605-46b2-44d0-b2e8-30492c2049ea" containerName="mariadb-database-create" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.634252 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7608642-7d28-462c-9838-2b1aa51a69ed" containerName="mariadb-account-create-update" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.634321 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a" containerName="mariadb-database-create" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.636577 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.639289 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.639867 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lvfmv" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.640088 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.640127 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.640134 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.640795 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.653123 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.655635 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.690832 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.702458 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.729342 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.767779 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/20223cbd-a9e0-4eb8-b051-0833bebe5975-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.767923 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.769768 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.769844 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/20223cbd-a9e0-4eb8-b051-0833bebe5975-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.770125 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-config\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.770263 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/20223cbd-a9e0-4eb8-b051-0833bebe5975-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.770348 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.770597 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc 
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.770942 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.771085 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/20223cbd-a9e0-4eb8-b051-0833bebe5975-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.771139 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/20223cbd-a9e0-4eb8-b051-0833bebe5975-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.771199 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csv8v\" (UniqueName: \"kubernetes.io/projected/20223cbd-a9e0-4eb8-b051-0833bebe5975-kube-api-access-csv8v\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.872272 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-nb\") pod \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") "
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.872598 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-sb\") pod \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") "
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.872951 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-config\") pod \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") "
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.873075 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-dns-svc\") pod \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") "
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.873308 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm2lj\" (UniqueName: \"kubernetes.io/projected/c3f57b1d-9728-43ba-bb02-98bc460fdebb-kube-api-access-tm2lj\") pod \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\" (UID: \"c3f57b1d-9728-43ba-bb02-98bc460fdebb\") "
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.873630 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.873731 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/20223cbd-a9e0-4eb8-b051-0833bebe5975-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.873834 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-config\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.873944 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/20223cbd-a9e0-4eb8-b051-0833bebe5975-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.874032 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.874135 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.874228 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.874345 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
\"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.874470 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/20223cbd-a9e0-4eb8-b051-0833bebe5975-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.874557 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/20223cbd-a9e0-4eb8-b051-0833bebe5975-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.874691 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csv8v\" (UniqueName: \"kubernetes.io/projected/20223cbd-a9e0-4eb8-b051-0833bebe5975-kube-api-access-csv8v\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.874821 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/20223cbd-a9e0-4eb8-b051-0833bebe5975-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.874989 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.877187 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f57b1d-9728-43ba-bb02-98bc460fdebb-kube-api-access-tm2lj" (OuterVolumeSpecName: "kube-api-access-tm2lj") pod "c3f57b1d-9728-43ba-bb02-98bc460fdebb" (UID: "c3f57b1d-9728-43ba-bb02-98bc460fdebb"). InnerVolumeSpecName "kube-api-access-tm2lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.877200 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/20223cbd-a9e0-4eb8-b051-0833bebe5975-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.877590 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/20223cbd-a9e0-4eb8-b051-0833bebe5975-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.878050 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/20223cbd-a9e0-4eb8-b051-0833bebe5975-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.883240 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0" Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.883717 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.883750 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a33938e059c072199d3b6223bdfa367a3b3bcef4e32c284009ff56b852d373de/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.884935 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/20223cbd-a9e0-4eb8-b051-0833bebe5975-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.892701 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-config\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.893510 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.894097 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/20223cbd-a9e0-4eb8-b051-0833bebe5975-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.894235 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.896277 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.898433 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csv8v\" (UniqueName: \"kubernetes.io/projected/20223cbd-a9e0-4eb8-b051-0833bebe5975-kube-api-access-csv8v\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.900892 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/20223cbd-a9e0-4eb8-b051-0833bebe5975-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.928159 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3f57b1d-9728-43ba-bb02-98bc460fdebb" (UID: "c3f57b1d-9728-43ba-bb02-98bc460fdebb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.928323 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3f57b1d-9728-43ba-bb02-98bc460fdebb" (UID: "c3f57b1d-9728-43ba-bb02-98bc460fdebb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.929751 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3f57b1d-9728-43ba-bb02-98bc460fdebb" (UID: "c3f57b1d-9728-43ba-bb02-98bc460fdebb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.932585 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-config" (OuterVolumeSpecName: "config") pod "c3f57b1d-9728-43ba-bb02-98bc460fdebb" (UID: "c3f57b1d-9728-43ba-bb02-98bc460fdebb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.936391 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09717264-e586-40d9-8eb7-6ef2244b94f6\") pod \"prometheus-metric-storage-0\" (UID: \"20223cbd-a9e0-4eb8-b051-0833bebe5975\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.977514 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm2lj\" (UniqueName: \"kubernetes.io/projected/c3f57b1d-9728-43ba-bb02-98bc460fdebb-kube-api-access-tm2lj\") on node \"crc\" DevicePath \"\""
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.977552 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.977582 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.977604 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-config\") on node \"crc\" DevicePath \"\""
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.977650 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3f57b1d-9728-43ba-bb02-98bc460fdebb-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 16:04:38 crc kubenswrapper[4966]: I0127 16:04:38.992648 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:39 crc kubenswrapper[4966]: I0127 16:04:39.322784 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fl9jh" event={"ID":"c3f57b1d-9728-43ba-bb02-98bc460fdebb","Type":"ContainerDied","Data":"46db96bae505ae2ceb97f33e71816e8e7356c35b79994787c591f638240b0641"}
Jan 27 16:04:39 crc kubenswrapper[4966]: I0127 16:04:39.323085 4966 scope.go:117] "RemoveContainer" containerID="02083eedbdab2be535c242a4c23e544eb3960386fae5ec21175a30910499daba"
Jan 27 16:04:39 crc kubenswrapper[4966]: I0127 16:04:39.323203 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fl9jh"
Jan 27 16:04:39 crc kubenswrapper[4966]: I0127 16:04:39.342610 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" event={"ID":"008f2af8-50bd-4efc-ab07-5dfbf858adbf","Type":"ContainerStarted","Data":"17b91c1fa92e87c3067c3108be3b36234a3bb71530608be06b81511a7be2f323"}
Jan 27 16:04:39 crc kubenswrapper[4966]: I0127 16:04:39.342667 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w"
Jan 27 16:04:39 crc kubenswrapper[4966]: I0127 16:04:39.378008 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" podStartSLOduration=7.3779900529999995 podStartE2EDuration="7.377990053s" podCreationTimestamp="2026-01-27 16:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:04:39.364092567 +0000 UTC m=+1345.666886065" watchObservedRunningTime="2026-01-27 16:04:39.377990053 +0000 UTC m=+1345.680783531"
Jan 27 16:04:39 crc kubenswrapper[4966]: I0127 16:04:39.425488 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fl9jh"]
Jan 27 16:04:39 crc kubenswrapper[4966]: I0127 16:04:39.441198 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fl9jh"]
Jan 27 16:04:39 crc kubenswrapper[4966]: I0127 16:04:39.560565 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 16:04:39 crc kubenswrapper[4966]: W0127 16:04:39.569023 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20223cbd_a9e0_4eb8_b051_0833bebe5975.slice/crio-c623f6d38eca1c396ce05292704871c9a6bd3afc86dddc2a753f7f960d0d3848 WatchSource:0}: Error finding container c623f6d38eca1c396ce05292704871c9a6bd3afc86dddc2a753f7f960d0d3848: Status 404 returned error can't find the container with id c623f6d38eca1c396ce05292704871c9a6bd3afc86dddc2a753f7f960d0d3848
Jan 27 16:04:40 crc kubenswrapper[4966]: I0127 16:04:40.351255 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"20223cbd-a9e0-4eb8-b051-0833bebe5975","Type":"ContainerStarted","Data":"c623f6d38eca1c396ce05292704871c9a6bd3afc86dddc2a753f7f960d0d3848"}
Jan 27 16:04:40 crc kubenswrapper[4966]: I0127 16:04:40.537439 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f57b1d-9728-43ba-bb02-98bc460fdebb" path="/var/lib/kubelet/pods/c3f57b1d-9728-43ba-bb02-98bc460fdebb/volumes"
Jan 27 16:04:40 crc kubenswrapper[4966]: I0127 16:04:40.538530 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb97b579-dcb8-44d5-8ea1-1e56c6b0d651" path="/var/lib/kubelet/pods/fb97b579-dcb8-44d5-8ea1-1e56c6b0d651/volumes"
Jan 27 16:04:43 crc kubenswrapper[4966]: I0127 16:04:43.302163 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w"
Jan 27 16:04:43 crc kubenswrapper[4966]: I0127 16:04:43.362951 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7qhd5"]
Jan 27 16:04:43 crc kubenswrapper[4966]: I0127 16:04:43.363176 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" podUID="df617bba-0d77-409f-a210-e49e176d7053" containerName="dnsmasq-dns" containerID="cri-o://a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8" gracePeriod=10
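The kuberuntime_container.go line above kills the dnsmasq-dns container with gracePeriod=10: the runtime delivers the container's stop signal (typically SIGTERM) and escalates to SIGKILL once the grace period expires. A sketch of that pattern for a plain process, outside any Kubernetes API; CRI runtimes such as CRI-O implement StopContainer along these lines.

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace asks the process to exit, then force-kills it if it is still
// running when the grace period ends.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite request first

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL after the deadline
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = stopWithGrace(cmd, 10*time.Second) // gracePeriod=10, as in the log
}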
containerName="dnsmasq-dns" containerID="cri-o://a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8" gracePeriod=10 Jan 27 16:04:43 crc kubenswrapper[4966]: I0127 16:04:43.393402 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"20223cbd-a9e0-4eb8-b051-0833bebe5975","Type":"ContainerStarted","Data":"ccb1828e48c7985ea1c64052156a7f4a669dab9a126ca4ded3de74a6d0c182fa"} Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.141990 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.180770 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-dns-svc\") pod \"df617bba-0d77-409f-a210-e49e176d7053\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.180970 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-config\") pod \"df617bba-0d77-409f-a210-e49e176d7053\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.181093 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-sb\") pod \"df617bba-0d77-409f-a210-e49e176d7053\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.181151 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxr2w\" (UniqueName: \"kubernetes.io/projected/df617bba-0d77-409f-a210-e49e176d7053-kube-api-access-kxr2w\") pod \"df617bba-0d77-409f-a210-e49e176d7053\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.181179 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-nb\") pod \"df617bba-0d77-409f-a210-e49e176d7053\" (UID: \"df617bba-0d77-409f-a210-e49e176d7053\") " Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.265108 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df617bba-0d77-409f-a210-e49e176d7053-kube-api-access-kxr2w" (OuterVolumeSpecName: "kube-api-access-kxr2w") pod "df617bba-0d77-409f-a210-e49e176d7053" (UID: "df617bba-0d77-409f-a210-e49e176d7053"). InnerVolumeSpecName "kube-api-access-kxr2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.330456 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxr2w\" (UniqueName: \"kubernetes.io/projected/df617bba-0d77-409f-a210-e49e176d7053-kube-api-access-kxr2w\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.332858 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df617bba-0d77-409f-a210-e49e176d7053" (UID: "df617bba-0d77-409f-a210-e49e176d7053"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.334533 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df617bba-0d77-409f-a210-e49e176d7053" (UID: "df617bba-0d77-409f-a210-e49e176d7053"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.376881 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df617bba-0d77-409f-a210-e49e176d7053" (UID: "df617bba-0d77-409f-a210-e49e176d7053"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.377885 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-config" (OuterVolumeSpecName: "config") pod "df617bba-0d77-409f-a210-e49e176d7053" (UID: "df617bba-0d77-409f-a210-e49e176d7053"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.408032 4966 generic.go:334] "Generic (PLEG): container finished" podID="df617bba-0d77-409f-a210-e49e176d7053" containerID="a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8" exitCode=0 Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.409005 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.409148 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" event={"ID":"df617bba-0d77-409f-a210-e49e176d7053","Type":"ContainerDied","Data":"a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8"} Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.409179 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7qhd5" event={"ID":"df617bba-0d77-409f-a210-e49e176d7053","Type":"ContainerDied","Data":"16fa5b3e58848df726bb58967f60da93d5cba533d8511106ca2e78dddbd466ba"} Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.409196 4966 scope.go:117] "RemoveContainer" containerID="a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.435645 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.435682 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.435694 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.435703 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/df617bba-0d77-409f-a210-e49e176d7053-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.447054 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7qhd5"] Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.448314 4966 scope.go:117] "RemoveContainer" containerID="3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.458619 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7qhd5"] Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.479136 4966 scope.go:117] "RemoveContainer" containerID="a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8" Jan 27 16:04:44 crc kubenswrapper[4966]: E0127 16:04:44.479651 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8\": container with ID starting with a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8 not found: ID does not exist" containerID="a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.479694 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8"} err="failed to get container status \"a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8\": rpc error: code = NotFound desc = could not find container \"a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8\": container with ID starting with a7f4297af309a9f941d627484a5517f256af216067bcb479eebb16932b499fa8 not found: ID does not exist" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.479723 4966 scope.go:117] "RemoveContainer" containerID="3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332" Jan 27 16:04:44 crc kubenswrapper[4966]: E0127 16:04:44.483355 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332\": container with ID starting with 3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332 not found: ID does not exist" containerID="3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.483385 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332"} err="failed to get container status \"3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332\": rpc error: code = NotFound desc = could not find container \"3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332\": container with ID starting with 3f2ab09668f0fd9883ec5dfffee54d9b09324805da4b8767f9d859b9dec3e332 not found: ID does not exist" Jan 27 16:04:44 crc kubenswrapper[4966]: I0127 16:04:44.561644 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df617bba-0d77-409f-a210-e49e176d7053" path="/var/lib/kubelet/pods/df617bba-0d77-409f-a210-e49e176d7053/volumes" Jan 27 16:04:50 crc kubenswrapper[4966]: I0127 16:04:50.883910 4966 generic.go:334] "Generic (PLEG): container finished" podID="20223cbd-a9e0-4eb8-b051-0833bebe5975" 
containerID="ccb1828e48c7985ea1c64052156a7f4a669dab9a126ca4ded3de74a6d0c182fa" exitCode=0 Jan 27 16:04:50 crc kubenswrapper[4966]: I0127 16:04:50.883990 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"20223cbd-a9e0-4eb8-b051-0833bebe5975","Type":"ContainerDied","Data":"ccb1828e48c7985ea1c64052156a7f4a669dab9a126ca4ded3de74a6d0c182fa"} Jan 27 16:04:51 crc kubenswrapper[4966]: I0127 16:04:51.902656 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"20223cbd-a9e0-4eb8-b051-0833bebe5975","Type":"ContainerStarted","Data":"8a78812cad71db9592d367a913b280e89f5e81ff198379f7e61ab2b0084f22d3"} Jan 27 16:04:54 crc kubenswrapper[4966]: I0127 16:04:54.938569 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"20223cbd-a9e0-4eb8-b051-0833bebe5975","Type":"ContainerStarted","Data":"779fde72469ddec3ac258c13f84d76eebb142a16b15690cbfcb9b9c15483f899"} Jan 27 16:04:54 crc kubenswrapper[4966]: I0127 16:04:54.939159 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"20223cbd-a9e0-4eb8-b051-0833bebe5975","Type":"ContainerStarted","Data":"1503b64e138812b8063daf91ca4a883b8e214e8bab09f6527e008a8995124171"} Jan 27 16:04:54 crc kubenswrapper[4966]: I0127 16:04:54.968264 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.968240705 podStartE2EDuration="16.968240705s" podCreationTimestamp="2026-01-27 16:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:04:54.964314472 +0000 UTC m=+1361.267107990" watchObservedRunningTime="2026-01-27 16:04:54.968240705 +0000 UTC m=+1361.271034183" Jan 27 16:04:55 crc kubenswrapper[4966]: I0127 16:04:55.951522 4966 generic.go:334] "Generic (PLEG): container finished" podID="ee305aa0-15fe-46a9-b62f-8936153daddf" containerID="ceb8672ab8232d5c6af3237ca37794f668ee6284d02d458ebaf61ccce70b1033" exitCode=0 Jan 27 16:04:55 crc kubenswrapper[4966]: I0127 16:04:55.951627 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sllzs" event={"ID":"ee305aa0-15fe-46a9-b62f-8936153daddf","Type":"ContainerDied","Data":"ceb8672ab8232d5c6af3237ca37794f668ee6284d02d458ebaf61ccce70b1033"} Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.357151 4966 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.518629 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-config-data\") pod \"ee305aa0-15fe-46a9-b62f-8936153daddf\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") "
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.518749 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-combined-ca-bundle\") pod \"ee305aa0-15fe-46a9-b62f-8936153daddf\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") "
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.518791 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5k8n\" (UniqueName: \"kubernetes.io/projected/ee305aa0-15fe-46a9-b62f-8936153daddf-kube-api-access-t5k8n\") pod \"ee305aa0-15fe-46a9-b62f-8936153daddf\" (UID: \"ee305aa0-15fe-46a9-b62f-8936153daddf\") "
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.527379 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee305aa0-15fe-46a9-b62f-8936153daddf-kube-api-access-t5k8n" (OuterVolumeSpecName: "kube-api-access-t5k8n") pod "ee305aa0-15fe-46a9-b62f-8936153daddf" (UID: "ee305aa0-15fe-46a9-b62f-8936153daddf"). InnerVolumeSpecName "kube-api-access-t5k8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.551242 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee305aa0-15fe-46a9-b62f-8936153daddf" (UID: "ee305aa0-15fe-46a9-b62f-8936153daddf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.584210 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-config-data" (OuterVolumeSpecName: "config-data") pod "ee305aa0-15fe-46a9-b62f-8936153daddf" (UID: "ee305aa0-15fe-46a9-b62f-8936153daddf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.622106 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.622163 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee305aa0-15fe-46a9-b62f-8936153daddf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.622181 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5k8n\" (UniqueName: \"kubernetes.io/projected/ee305aa0-15fe-46a9-b62f-8936153daddf-kube-api-access-t5k8n\") on node \"crc\" DevicePath \"\""
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.980395 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sllzs" event={"ID":"ee305aa0-15fe-46a9-b62f-8936153daddf","Type":"ContainerDied","Data":"202058a1a6d50b6265d5d06a3e1102efb39ad6808582a1758584a8d171692b24"}
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.980670 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="202058a1a6d50b6265d5d06a3e1102efb39ad6808582a1758584a8d171692b24"
Jan 27 16:04:57 crc kubenswrapper[4966]: I0127 16:04:57.980689 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sllzs"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.249489 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-68j4q"]
Jan 27 16:04:58 crc kubenswrapper[4966]: E0127 16:04:58.249925 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df617bba-0d77-409f-a210-e49e176d7053" containerName="dnsmasq-dns"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.249937 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="df617bba-0d77-409f-a210-e49e176d7053" containerName="dnsmasq-dns"
Jan 27 16:04:58 crc kubenswrapper[4966]: E0127 16:04:58.249953 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df617bba-0d77-409f-a210-e49e176d7053" containerName="init"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.249959 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="df617bba-0d77-409f-a210-e49e176d7053" containerName="init"
Jan 27 16:04:58 crc kubenswrapper[4966]: E0127 16:04:58.249972 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f57b1d-9728-43ba-bb02-98bc460fdebb" containerName="init"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.249977 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f57b1d-9728-43ba-bb02-98bc460fdebb" containerName="init"
Jan 27 16:04:58 crc kubenswrapper[4966]: E0127 16:04:58.250000 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee305aa0-15fe-46a9-b62f-8936153daddf" containerName="keystone-db-sync"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.250006 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee305aa0-15fe-46a9-b62f-8936153daddf" containerName="keystone-db-sync"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.255533 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="df617bba-0d77-409f-a210-e49e176d7053" containerName="dnsmasq-dns"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.255575 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f57b1d-9728-43ba-bb02-98bc460fdebb" containerName="init"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.255587 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee305aa0-15fe-46a9-b62f-8936153daddf" containerName="keystone-db-sync"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.257469 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.265450 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.265716 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-97zfb"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.266910 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.268887 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.272366 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.305740 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-68j4q"]
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.328285 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nlwsx"]
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.334057 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.341490 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-credential-keys\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.341579 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-combined-ca-bundle\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.341697 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-scripts\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.342005 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqng\" (UniqueName: \"kubernetes.io/projected/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-kube-api-access-bjqng\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.342098 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-config-data\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.342234 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-fernet-keys\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.387065 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nlwsx"]
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.443815 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82rvt\" (UniqueName: \"kubernetes.io/projected/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-kube-api-access-82rvt\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.443903 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-credential-keys\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.451010 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-combined-ca-bundle\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.451074 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.451154 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-scripts\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.451274 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjqng\" (UniqueName: \"kubernetes.io/projected/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-kube-api-access-bjqng\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.451325 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.451345 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-config-data\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.452649 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-credential-keys\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.468714 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-config\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.468827 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-fernet-keys\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.468862 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.468918 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.482447 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-scripts\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.487575 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-fernet-keys\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.489830 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-combined-ca-bundle\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.491724 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-config-data\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.507061 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-9mgp8"]
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.508565 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.522683 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-99swl"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.522942 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.548825 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjqng\" (UniqueName: \"kubernetes.io/projected/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-kube-api-access-bjqng\") pod \"keystone-bootstrap-68j4q\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") " pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.552491 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-9mgp8"]
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.572853 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82rvt\" (UniqueName: \"kubernetes.io/projected/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-kube-api-access-82rvt\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.572954 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.576467 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.578141 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.578269 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-config\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.578336 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.578391 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.579141 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.584433 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.614606 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.622335 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-config\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.629945 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.699269 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j78g\" (UniqueName: \"kubernetes.io/projected/05fc08f5-c60a-4248-8de2-447d0415188e-kube-api-access-4j78g\") pod \"heat-db-sync-9mgp8\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.699353 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-combined-ca-bundle\") pod \"heat-db-sync-9mgp8\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.707078 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-config-data\") pod \"heat-db-sync-9mgp8\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.756281 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82rvt\" (UniqueName: \"kubernetes.io/projected/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-kube-api-access-82rvt\") pod \"dnsmasq-dns-bbf5cc879-nlwsx\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") " pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.823495 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j78g\" (UniqueName: \"kubernetes.io/projected/05fc08f5-c60a-4248-8de2-447d0415188e-kube-api-access-4j78g\") pod \"heat-db-sync-9mgp8\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.823549 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-combined-ca-bundle\") pod \"heat-db-sync-9mgp8\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.823696 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-config-data\") pod \"heat-db-sync-9mgp8\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.869683 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j78g\" (UniqueName: \"kubernetes.io/projected/05fc08f5-c60a-4248-8de2-447d0415188e-kube-api-access-4j78g\") pod \"heat-db-sync-9mgp8\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.877301 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-config-data\") pod \"heat-db-sync-9mgp8\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.877950 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-combined-ca-bundle\") pod \"heat-db-sync-9mgp8\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " pod="openstack/heat-db-sync-9mgp8"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.882030 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-7bd6d"]
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.886241 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7bd6d"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.902383 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7bd6d"]
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.913011 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-v7pvv"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.915241 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.919987 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dp2rw"]
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.921414 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dp2rw"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.923486 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.937658 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bzm74"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.938027 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.938153 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.976791 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:04:58 crc kubenswrapper[4966]: I0127 16:04:58.978132 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dp2rw"]
Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:58.992970 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.036503 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-config-data\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw"
Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.036565 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-combined-ca-bundle\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw"
Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.036605 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-etc-machine-id\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw"
Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.036652 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-scripts\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw"
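Each "Caches populated for *v1.Secret" line above marks a watch the kubelet starts for a Secret referenced by a new pod's volumes, so later mounts read from a synced local cache instead of hitting the API server each time. A client-go sketch of the same idea (not kubelet's internal manager): start a Secret informer scoped to the "openstack" namespace and wait for its cache to sync, which is the point these lines record.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location; kubelet itself uses its
	// node credentials instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openstack"))
	secrets := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, secrets.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("Caches populated for *v1.Secret in namespace openstack")
}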
\"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.036705 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-combined-ca-bundle\") pod \"neutron-db-sync-7bd6d\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.036739 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-db-sync-config-data\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.036768 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4cs2\" (UniqueName: \"kubernetes.io/projected/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-kube-api-access-q4cs2\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.036831 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q2pk\" (UniqueName: \"kubernetes.io/projected/71afeadb-1cd9-461f-b899-307f7dd34fca-kube-api-access-7q2pk\") pod \"neutron-db-sync-7bd6d\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.036883 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-config\") pod \"neutron-db-sync-7bd6d\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.044512 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9mgp8" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.095339 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-h2lbn"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.096785 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.103867 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hdq2c" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.104062 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.106044 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.123109 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-h2lbn"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.144595 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nlwsx"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.163477 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q2pk\" (UniqueName: \"kubernetes.io/projected/71afeadb-1cd9-461f-b899-307f7dd34fca-kube-api-access-7q2pk\") pod \"neutron-db-sync-7bd6d\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.173250 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-config\") pod \"neutron-db-sync-7bd6d\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.173786 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-config-data\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.173843 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-combined-ca-bundle\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.173884 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-etc-machine-id\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.173984 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-scripts\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.174086 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-combined-ca-bundle\") pod \"neutron-db-sync-7bd6d\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.174146 
4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-db-sync-config-data\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.174170 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4cs2\" (UniqueName: \"kubernetes.io/projected/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-kube-api-access-q4cs2\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.181552 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-combined-ca-bundle\") pod \"neutron-db-sync-7bd6d\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.182419 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-db-sync-config-data\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.186234 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-combined-ca-bundle\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.186526 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-etc-machine-id\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.189095 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-config-data\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.190463 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-scripts\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.193631 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-config\") pod \"neutron-db-sync-7bd6d\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.196604 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q2pk\" (UniqueName: \"kubernetes.io/projected/71afeadb-1cd9-461f-b899-307f7dd34fca-kube-api-access-7q2pk\") pod \"neutron-db-sync-7bd6d\" (UID: 
\"71afeadb-1cd9-461f-b899-307f7dd34fca\") " pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.200353 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4cs2\" (UniqueName: \"kubernetes.io/projected/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-kube-api-access-q4cs2\") pod \"cinder-db-sync-dp2rw\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.230972 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lhvdf"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.232694 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.248495 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.259004 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lhvdf"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.277263 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d064aa6-d9c2-4adf-a25b-33a70d86e728-logs\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.277321 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7k8\" (UniqueName: \"kubernetes.io/projected/8d064aa6-d9c2-4adf-a25b-33a70d86e728-kube-api-access-sb7k8\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.277366 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-combined-ca-bundle\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.277458 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-config-data\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.277530 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-scripts\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.284221 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-c4wdb"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.286231 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.290316 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jt6w5" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.290761 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.298231 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.298991 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c4wdb"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379480 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-config\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379563 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d064aa6-d9c2-4adf-a25b-33a70d86e728-logs\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379594 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7k8\" (UniqueName: \"kubernetes.io/projected/8d064aa6-d9c2-4adf-a25b-33a70d86e728-kube-api-access-sb7k8\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379626 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-combined-ca-bundle\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379646 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379674 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdq9h\" (UniqueName: \"kubernetes.io/projected/5989d951-ed71-4800-9400-390cbe5513f9-kube-api-access-gdq9h\") pod \"barbican-db-sync-c4wdb\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379700 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdj4\" (UniqueName: \"kubernetes.io/projected/c1a7725b-4536-4728-9aa7-99ca1c44daa6-kube-api-access-lpdj4\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 
16:04:59.379729 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379764 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-config-data\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379811 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379842 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-combined-ca-bundle\") pod \"barbican-db-sync-c4wdb\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379860 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379885 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-scripts\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.379926 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-db-sync-config-data\") pod \"barbican-db-sync-c4wdb\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.380339 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d064aa6-d9c2-4adf-a25b-33a70d86e728-logs\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.385743 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-scripts\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.386361 4966 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-combined-ca-bundle\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.387100 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-config-data\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.400640 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7k8\" (UniqueName: \"kubernetes.io/projected/8d064aa6-d9c2-4adf-a25b-33a70d86e728-kube-api-access-sb7k8\") pod \"placement-db-sync-h2lbn\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.448407 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-h2lbn" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.484264 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.484317 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdq9h\" (UniqueName: \"kubernetes.io/projected/5989d951-ed71-4800-9400-390cbe5513f9-kube-api-access-gdq9h\") pod \"barbican-db-sync-c4wdb\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.484345 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpdj4\" (UniqueName: \"kubernetes.io/projected/c1a7725b-4536-4728-9aa7-99ca1c44daa6-kube-api-access-lpdj4\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.484374 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.484435 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.484761 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-combined-ca-bundle\") pod \"barbican-db-sync-c4wdb\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc 
kubenswrapper[4966]: I0127 16:04:59.484786 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.484822 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-db-sync-config-data\") pod \"barbican-db-sync-c4wdb\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.484861 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-config\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.485827 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-config\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.485886 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.485964 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.486459 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.487747 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.500889 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-combined-ca-bundle\") pod \"barbican-db-sync-c4wdb\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.504522 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpdj4\" 
(UniqueName: \"kubernetes.io/projected/c1a7725b-4536-4728-9aa7-99ca1c44daa6-kube-api-access-lpdj4\") pod \"dnsmasq-dns-56df8fb6b7-lhvdf\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.506334 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdq9h\" (UniqueName: \"kubernetes.io/projected/5989d951-ed71-4800-9400-390cbe5513f9-kube-api-access-gdq9h\") pod \"barbican-db-sync-c4wdb\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.507579 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-db-sync-config-data\") pod \"barbican-db-sync-c4wdb\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.584966 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.607723 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.610325 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.615068 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.619072 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.620828 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.625473 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.627875 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.628105 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mrw97" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.628245 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.628354 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.628744 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.656667 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.701952 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705226 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-config-data\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705321 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705396 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-logs\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705432 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-log-httpd\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705474 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705496 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-scripts\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " 
pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705555 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705738 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-scripts\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705833 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705879 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v945r\" (UniqueName: \"kubernetes.io/projected/7c153482-f01a-4e13-ba03-fdd162f3a758-kube-api-access-v945r\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.705959 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-run-httpd\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.706144 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.706223 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-config-data\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.706332 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.706349 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp26t\" (UniqueName: \"kubernetes.io/projected/400b7e5e-907b-4128-b04f-9f10acabad20-kube-api-access-dp26t\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " 
pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.808828 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.808883 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-config-data\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.808946 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.808966 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp26t\" (UniqueName: \"kubernetes.io/projected/400b7e5e-907b-4128-b04f-9f10acabad20-kube-api-access-dp26t\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.808996 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-config-data\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809021 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809051 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-log-httpd\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809066 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-logs\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809087 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809101 4966 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-scripts\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809123 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809162 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-scripts\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809199 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809215 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v945r\" (UniqueName: \"kubernetes.io/projected/7c153482-f01a-4e13-ba03-fdd162f3a758-kube-api-access-v945r\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809237 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-run-httpd\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.809653 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-run-httpd\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.815766 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.817371 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-logs\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.823780 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-scripts\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc 
kubenswrapper[4966]: I0127 16:04:59.825130 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-config-data\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.828152 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.830179 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-scripts\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.832960 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-68j4q"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.837632 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-config-data\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.838317 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-log-httpd\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.839611 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.840956 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.845960 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.857261 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.857491 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/976b8fe222d013f8479d40e9925328ad0f72c98e7e279071e3e1ab72dc6f328d/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.861013 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp26t\" (UniqueName: \"kubernetes.io/projected/400b7e5e-907b-4128-b04f-9f10acabad20-kube-api-access-dp26t\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.870713 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v945r\" (UniqueName: \"kubernetes.io/projected/7c153482-f01a-4e13-ba03-fdd162f3a758-kube-api-access-v945r\") pod \"ceilometer-0\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " pod="openstack/ceilometer-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.871633 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.878923 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.882224 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.882241 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.898240 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 16:04:59 crc kubenswrapper[4966]: I0127 16:04:59.922748 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nlwsx"] Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.015414 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-9mgp8"] Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.030173 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.031203 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.031974 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qcpvg\" (UniqueName: \"kubernetes.io/projected/61b000a1-e2b2-49b1-956a-35596bfeb52d-kube-api-access-qcpvg\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.032184 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.032227 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.032276 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.032298 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.032360 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-logs\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.032411 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.080698 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-68j4q" event={"ID":"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80","Type":"ContainerStarted","Data":"addf3d3404815e5641e25d674b36462fb61ed7a559b76ae83e28864350aab0f8"} Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.081707 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9mgp8" event={"ID":"05fc08f5-c60a-4248-8de2-447d0415188e","Type":"ContainerStarted","Data":"5a6f4e234c1b3747c3c3b19bb8993357cc35c23f55db123d0151470c3b6baf3d"} Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.082653 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.135059 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.135123 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.135176 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.135201 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.135265 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-logs\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.135309 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.135358 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.135437 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcpvg\" (UniqueName: \"kubernetes.io/projected/61b000a1-e2b2-49b1-956a-35596bfeb52d-kube-api-access-qcpvg\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.136495 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-logs\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0" 
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.136713 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.142211 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.143389 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.143445 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/228a00ce595fa766fa34f65a641b8516d33aa40c9543b442d44ee069fda194ff/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.144041 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.145365 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.153568 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.154181 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.163381 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcpvg\" (UniqueName: \"kubernetes.io/projected/61b000a1-e2b2-49b1-956a-35596bfeb52d-kube-api-access-qcpvg\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.251335 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7bd6d"]
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.272519 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.406829 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.419764 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-h2lbn"]
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.447548 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dp2rw"]
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.784404 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lhvdf"]
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.825988 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 16:05:00 crc kubenswrapper[4966]: I0127 16:05:00.844524 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c4wdb"]
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.008975 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.012845 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.138233 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7bd6d" event={"ID":"71afeadb-1cd9-461f-b899-307f7dd34fca","Type":"ContainerStarted","Data":"41b7b1e528a76290c9029e5e699549001d5943d2e75e30d4136830e7c1f442e0"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.138276 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7bd6d" event={"ID":"71afeadb-1cd9-461f-b899-307f7dd34fca","Type":"ContainerStarted","Data":"2c4561a47e4a969061a30e76a4422428c0a6c49c5a79bd0b8c93c6b46deebaad"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.141054 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h2lbn" event={"ID":"8d064aa6-d9c2-4adf-a25b-33a70d86e728","Type":"ContainerStarted","Data":"a10b73dcbd1cf36bbdec657360b0127447c0485692411ef4bcbdfd2062f289a3"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.147575 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerStarted","Data":"6aa9fd8336bf768fb565cc8f9aea0a88dd612e3ac5099b0ae87976278d85b9ff"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.153111 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" event={"ID":"c1a7725b-4536-4728-9aa7-99ca1c44daa6","Type":"ContainerStarted","Data":"aa927131a970f89d7e78417c7bdfad40feaa9196b177520dd4819347a8ce0937"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.168166 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c4wdb" event={"ID":"5989d951-ed71-4800-9400-390cbe5513f9","Type":"ContainerStarted","Data":"046b0ef810b1ef64ba42b7a96c92c38e511990946e24bce7e13856eae676bcd7"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.169687 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-7bd6d" podStartSLOduration=3.169667724 podStartE2EDuration="3.169667724s" podCreationTimestamp="2026-01-27 16:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:01.168481736 +0000 UTC m=+1367.471275224" watchObservedRunningTime="2026-01-27 16:05:01.169667724 +0000 UTC m=+1367.472461212"
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.173723 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-68j4q" event={"ID":"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80","Type":"ContainerStarted","Data":"d6067a5aa2410ee63aaa2194fc062054fb7d86e968c4646ada3dea1cf4f76702"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.197594 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-68j4q" podStartSLOduration=3.19757932 podStartE2EDuration="3.19757932s" podCreationTimestamp="2026-01-27 16:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:01.196553907 +0000 UTC m=+1367.499347395" watchObservedRunningTime="2026-01-27 16:05:01.19757932 +0000 UTC m=+1367.500372808"
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.201297 4966 generic.go:334] "Generic (PLEG): container finished" podID="fd66bf3a-16b8-4977-b579-b2f5ad423fbc" containerID="a4baaaedfec430a317a36eeee0927ddb1dc1fb624120c11ae1a6a7fb1c32756e" exitCode=0
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.201390 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx" event={"ID":"fd66bf3a-16b8-4977-b579-b2f5ad423fbc","Type":"ContainerDied","Data":"a4baaaedfec430a317a36eeee0927ddb1dc1fb624120c11ae1a6a7fb1c32756e"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.201419 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx" event={"ID":"fd66bf3a-16b8-4977-b579-b2f5ad423fbc","Type":"ContainerStarted","Data":"7f8aaaab965d509f04f7986f0d0f0a06e3129b2f876a7669959e69e75dacfc57"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.210684 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dp2rw" event={"ID":"4d47c000-d7a4-4dca-a051-15a5d91f3ab9","Type":"ContainerStarted","Data":"6ec5784942c7c3981aa257e3cac21e872663f7d243048fb4b67ea9da8838113e"}
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.369251 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.465399 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 16:05:01 crc kubenswrapper[4966]: I0127 16:05:01.694616 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.016823 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.145112 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-nb\") pod \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") "
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.145258 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-swift-storage-0\") pod \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") "
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.145313 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-config\") pod \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") "
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.145425 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-svc\") pod \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") "
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.145478 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82rvt\" (UniqueName: \"kubernetes.io/projected/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-kube-api-access-82rvt\") pod \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") "
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.145528 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-sb\") pod \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\" (UID: \"fd66bf3a-16b8-4977-b579-b2f5ad423fbc\") "
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.166626 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-kube-api-access-82rvt" (OuterVolumeSpecName: "kube-api-access-82rvt") pod "fd66bf3a-16b8-4977-b579-b2f5ad423fbc" (UID: "fd66bf3a-16b8-4977-b579-b2f5ad423fbc"). InnerVolumeSpecName "kube-api-access-82rvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.200857 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fd66bf3a-16b8-4977-b579-b2f5ad423fbc" (UID: "fd66bf3a-16b8-4977-b579-b2f5ad423fbc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.201006 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd66bf3a-16b8-4977-b579-b2f5ad423fbc" (UID: "fd66bf3a-16b8-4977-b579-b2f5ad423fbc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.209145 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fd66bf3a-16b8-4977-b579-b2f5ad423fbc" (UID: "fd66bf3a-16b8-4977-b579-b2f5ad423fbc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.223730 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fd66bf3a-16b8-4977-b579-b2f5ad423fbc" (UID: "fd66bf3a-16b8-4977-b579-b2f5ad423fbc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.228281 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-config" (OuterVolumeSpecName: "config") pod "fd66bf3a-16b8-4977-b579-b2f5ad423fbc" (UID: "fd66bf3a-16b8-4977-b579-b2f5ad423fbc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.248667 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.248719 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.248738 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-config\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.248749 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.248760 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82rvt\" (UniqueName: \"kubernetes.io/projected/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-kube-api-access-82rvt\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.248772 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd66bf3a-16b8-4977-b579-b2f5ad423fbc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.251616 4966 generic.go:334] "Generic (PLEG): container finished" podID="c1a7725b-4536-4728-9aa7-99ca1c44daa6" containerID="399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4" exitCode=0
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.251813 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" event={"ID":"c1a7725b-4536-4728-9aa7-99ca1c44daa6","Type":"ContainerDied","Data":"399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4"}
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.262437 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61b000a1-e2b2-49b1-956a-35596bfeb52d","Type":"ContainerStarted","Data":"e4c6c63ebad8d5d4c5bd3c532d19959beb56c98794e03d34dbbd63a19df16dce"}
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.266490 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx" event={"ID":"fd66bf3a-16b8-4977-b579-b2f5ad423fbc","Type":"ContainerDied","Data":"7f8aaaab965d509f04f7986f0d0f0a06e3129b2f876a7669959e69e75dacfc57"}
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.266528 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-nlwsx"
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.266571 4966 scope.go:117] "RemoveContainer" containerID="a4baaaedfec430a317a36eeee0927ddb1dc1fb624120c11ae1a6a7fb1c32756e"
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.279019 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"400b7e5e-907b-4128-b04f-9f10acabad20","Type":"ContainerStarted","Data":"31925341b75eae045f18cc8dbb398010572833b2776f7a845ac1b29b6cff9b3e"}
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.371951 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nlwsx"]
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.382607 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nlwsx"]
Jan 27 16:05:02 crc kubenswrapper[4966]: I0127 16:05:02.546524 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd66bf3a-16b8-4977-b579-b2f5ad423fbc" path="/var/lib/kubelet/pods/fd66bf3a-16b8-4977-b579-b2f5ad423fbc/volumes"
Jan 27 16:05:03 crc kubenswrapper[4966]: I0127 16:05:03.310798 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" event={"ID":"c1a7725b-4536-4728-9aa7-99ca1c44daa6","Type":"ContainerStarted","Data":"3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7"}
Jan 27 16:05:03 crc kubenswrapper[4966]: I0127 16:05:03.314122 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf"
Jan 27 16:05:03 crc kubenswrapper[4966]: I0127 16:05:03.322498 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61b000a1-e2b2-49b1-956a-35596bfeb52d","Type":"ContainerStarted","Data":"fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab"}
Jan 27 16:05:03 crc kubenswrapper[4966]: I0127 16:05:03.344082 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"400b7e5e-907b-4128-b04f-9f10acabad20","Type":"ContainerStarted","Data":"a709ad0f8b623403a090199ee65b90f85fc1c152c554456ef115250b4a9d3c72"}
Jan 27 16:05:03 crc kubenswrapper[4966]: I0127 16:05:03.348396 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" podStartSLOduration=5.348374671 podStartE2EDuration="5.348374671s" podCreationTimestamp="2026-01-27 16:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:03.333822465 +0000 UTC m=+1369.636615973" watchObservedRunningTime="2026-01-27 16:05:03.348374671 +0000 UTC m=+1369.651168219"
Jan 27 16:05:04 crc kubenswrapper[4966]: I0127 16:05:04.369874 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61b000a1-e2b2-49b1-956a-35596bfeb52d","Type":"ContainerStarted","Data":"b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036"}
Jan 27 16:05:04 crc kubenswrapper[4966]: I0127 16:05:04.369956 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerName="glance-log" containerID="cri-o://fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab" gracePeriod=30
Jan 27 16:05:04 crc kubenswrapper[4966]: I0127 16:05:04.370033 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerName="glance-httpd" containerID="cri-o://b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036" gracePeriod=30
Jan 27 16:05:04 crc kubenswrapper[4966]: I0127 16:05:04.374972 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"400b7e5e-907b-4128-b04f-9f10acabad20","Type":"ContainerStarted","Data":"6680c2c05ba70570db81d901f50129e13cb5e866f3de7035398294358f10384d"}
Jan 27 16:05:04 crc kubenswrapper[4966]: I0127 16:05:04.375043 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="400b7e5e-907b-4128-b04f-9f10acabad20" containerName="glance-log" containerID="cri-o://a709ad0f8b623403a090199ee65b90f85fc1c152c554456ef115250b4a9d3c72" gracePeriod=30
Jan 27 16:05:04 crc kubenswrapper[4966]: I0127 16:05:04.375069 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="400b7e5e-907b-4128-b04f-9f10acabad20" containerName="glance-httpd" containerID="cri-o://6680c2c05ba70570db81d901f50129e13cb5e866f3de7035398294358f10384d" gracePeriod=30
Jan 27 16:05:04 crc kubenswrapper[4966]: I0127 16:05:04.404731 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.404713404 podStartE2EDuration="6.404713404s" podCreationTimestamp="2026-01-27 16:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:04.391329504 +0000 UTC m=+1370.694123002" watchObservedRunningTime="2026-01-27 16:05:04.404713404 +0000 UTC m=+1370.707506892"
Jan 27 16:05:04 crc kubenswrapper[4966]: I0127 16:05:04.417325 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.417305599 podStartE2EDuration="6.417305599s" podCreationTimestamp="2026-01-27 16:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:04.412639203 +0000 UTC m=+1370.715432711" watchObservedRunningTime="2026-01-27 16:05:04.417305599 +0000 UTC m=+1370.720099087"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.198274 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.242986 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-combined-ca-bundle\") pod \"61b000a1-e2b2-49b1-956a-35596bfeb52d\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") "
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.243061 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcpvg\" (UniqueName: \"kubernetes.io/projected/61b000a1-e2b2-49b1-956a-35596bfeb52d-kube-api-access-qcpvg\") pod \"61b000a1-e2b2-49b1-956a-35596bfeb52d\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") "
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.243119 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-config-data\") pod \"61b000a1-e2b2-49b1-956a-35596bfeb52d\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") "
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.243150 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-internal-tls-certs\") pod \"61b000a1-e2b2-49b1-956a-35596bfeb52d\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") "
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.243351 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"61b000a1-e2b2-49b1-956a-35596bfeb52d\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") "
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.243444 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-scripts\") pod \"61b000a1-e2b2-49b1-956a-35596bfeb52d\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") "
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.243692 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-logs\") pod \"61b000a1-e2b2-49b1-956a-35596bfeb52d\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") "
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.243746 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-httpd-run\") pod \"61b000a1-e2b2-49b1-956a-35596bfeb52d\" (UID: \"61b000a1-e2b2-49b1-956a-35596bfeb52d\") "
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.250576 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-logs" (OuterVolumeSpecName: "logs") pod "61b000a1-e2b2-49b1-956a-35596bfeb52d" (UID: "61b000a1-e2b2-49b1-956a-35596bfeb52d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.253291 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "61b000a1-e2b2-49b1-956a-35596bfeb52d" (UID: "61b000a1-e2b2-49b1-956a-35596bfeb52d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.253945 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-scripts" (OuterVolumeSpecName: "scripts") pod "61b000a1-e2b2-49b1-956a-35596bfeb52d" (UID: "61b000a1-e2b2-49b1-956a-35596bfeb52d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.263516 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61b000a1-e2b2-49b1-956a-35596bfeb52d-kube-api-access-qcpvg" (OuterVolumeSpecName: "kube-api-access-qcpvg") pod "61b000a1-e2b2-49b1-956a-35596bfeb52d" (UID: "61b000a1-e2b2-49b1-956a-35596bfeb52d"). InnerVolumeSpecName "kube-api-access-qcpvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.291885 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51" (OuterVolumeSpecName: "glance") pod "61b000a1-e2b2-49b1-956a-35596bfeb52d" (UID: "61b000a1-e2b2-49b1-956a-35596bfeb52d"). InnerVolumeSpecName "pvc-eac12953-51d3-4d82-a334-6290e6023c51". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.294245 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61b000a1-e2b2-49b1-956a-35596bfeb52d" (UID: "61b000a1-e2b2-49b1-956a-35596bfeb52d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.331231 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "61b000a1-e2b2-49b1-956a-35596bfeb52d" (UID: "61b000a1-e2b2-49b1-956a-35596bfeb52d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.344146 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-config-data" (OuterVolumeSpecName: "config-data") pod "61b000a1-e2b2-49b1-956a-35596bfeb52d" (UID: "61b000a1-e2b2-49b1-956a-35596bfeb52d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.347812 4966 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.347881 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") on node \"crc\" "
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.347933 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.347944 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-logs\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.347953 4966 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61b000a1-e2b2-49b1-956a-35596bfeb52d-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.347963 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.347973 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcpvg\" (UniqueName: \"kubernetes.io/projected/61b000a1-e2b2-49b1-956a-35596bfeb52d-kube-api-access-qcpvg\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.348004 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b000a1-e2b2-49b1-956a-35596bfeb52d-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.386668 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.386851 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-eac12953-51d3-4d82-a334-6290e6023c51" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51") on node "crc"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.401025 4966 generic.go:334] "Generic (PLEG): container finished" podID="400b7e5e-907b-4128-b04f-9f10acabad20" containerID="6680c2c05ba70570db81d901f50129e13cb5e866f3de7035398294358f10384d" exitCode=143
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.401059 4966 generic.go:334] "Generic (PLEG): container finished" podID="400b7e5e-907b-4128-b04f-9f10acabad20" containerID="a709ad0f8b623403a090199ee65b90f85fc1c152c554456ef115250b4a9d3c72" exitCode=143
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.401151 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"400b7e5e-907b-4128-b04f-9f10acabad20","Type":"ContainerDied","Data":"6680c2c05ba70570db81d901f50129e13cb5e866f3de7035398294358f10384d"}
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.401217 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"400b7e5e-907b-4128-b04f-9f10acabad20","Type":"ContainerDied","Data":"a709ad0f8b623403a090199ee65b90f85fc1c152c554456ef115250b4a9d3c72"}
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.408299 4966 generic.go:334] "Generic (PLEG): container finished" podID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerID="b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036" exitCode=143
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.408352 4966 generic.go:334] "Generic (PLEG): container finished" podID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerID="fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab" exitCode=143
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.408377 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61b000a1-e2b2-49b1-956a-35596bfeb52d","Type":"ContainerDied","Data":"b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036"}
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.408382 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.408437 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61b000a1-e2b2-49b1-956a-35596bfeb52d","Type":"ContainerDied","Data":"fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab"}
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.408449 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61b000a1-e2b2-49b1-956a-35596bfeb52d","Type":"ContainerDied","Data":"e4c6c63ebad8d5d4c5bd3c532d19959beb56c98794e03d34dbbd63a19df16dce"}
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.408465 4966 scope.go:117] "RemoveContainer" containerID="b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.455280 4966 reconciler_common.go:293] "Volume detached for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.462163 4966 scope.go:117] "RemoveContainer" containerID="fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.466836 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.491631 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.501279 4966 scope.go:117] "RemoveContainer" containerID="b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.514383 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 16:05:05 crc kubenswrapper[4966]: E0127 16:05:05.516144 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerName="glance-httpd"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.516255 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerName="glance-httpd"
Jan 27 16:05:05 crc kubenswrapper[4966]: E0127 16:05:05.516350 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerName="glance-log"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.516403 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerName="glance-log"
Jan 27 16:05:05 crc kubenswrapper[4966]: E0127 16:05:05.516477 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd66bf3a-16b8-4977-b579-b2f5ad423fbc" containerName="init"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.516534 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd66bf3a-16b8-4977-b579-b2f5ad423fbc" containerName="init"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.516820 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerName="glance-log"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.516988 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="61b000a1-e2b2-49b1-956a-35596bfeb52d" containerName="glance-httpd"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.517059 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd66bf3a-16b8-4977-b579-b2f5ad423fbc" containerName="init"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.518250 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.522934 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.524169 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.530087 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 16:05:05 crc kubenswrapper[4966]: E0127 16:05:05.546448 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036\": container with ID starting with b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036 not found: ID does not exist" containerID="b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.553505 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036"} err="failed to get container status \"b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036\": rpc error: code = NotFound desc = could not find container \"b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036\": container with ID starting with b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036 not found: ID does not exist"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.553555 4966 scope.go:117] "RemoveContainer" containerID="fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab"
Jan 27 16:05:05 crc kubenswrapper[4966]: E0127 16:05:05.554758 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab\": container with ID starting with fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab not found: ID does not exist" containerID="fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.554997 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab"} err="failed to get container status \"fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab\": rpc error: code = NotFound desc = could not find container \"fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab\": container with ID starting with fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab not found: ID does not exist"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.555033 4966 scope.go:117] "RemoveContainer" containerID="b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.561044 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036"} err="failed to get container status \"b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036\": rpc error: code = NotFound desc = could not find container \"b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036\": container with ID starting with b4f66760646222fa75560c705be7b351da1cd8248bf7bb96673fe63370a4a036 not found: ID does not exist"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.561085 4966 scope.go:117] "RemoveContainer" containerID="fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.565060 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab"} err="failed to get container status \"fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab\": rpc error: code = NotFound desc = could not find container \"fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab\": container with ID starting with fedbe8374a534e0891eabdc2b6ccb05981e01fd57f1c4c5fb6962526f5ff0eab not found: ID does not exist"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.660574 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldnm9\" (UniqueName: \"kubernetes.io/projected/ae145432-f4ac-4937-a71d-5c871832c20a-kube-api-access-ldnm9\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.660651 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.660852 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.660969 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.660992 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.661055 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.661083 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.661566 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.764400 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.764496 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.764523 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.764552 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.764620 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.764655 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.764774 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.764826 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldnm9\" (UniqueName: \"kubernetes.io/projected/ae145432-f4ac-4937-a71d-5c871832c20a-kube-api-access-ldnm9\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.765867 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.767417 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.773119 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.773167 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/228a00ce595fa766fa34f65a641b8516d33aa40c9543b442d44ee069fda194ff/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.773395 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.773551 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.774102 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.783132 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldnm9\" (UniqueName: \"kubernetes.io/projected/ae145432-f4ac-4937-a71d-5c871832c20a-kube-api-access-ldnm9\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.783235 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.872740 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:05 crc kubenswrapper[4966]: I0127 16:05:05.937965 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 16:05:06 crc kubenswrapper[4966]: I0127 16:05:06.428613 4966 generic.go:334] "Generic (PLEG): container finished" podID="b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" containerID="d6067a5aa2410ee63aaa2194fc062054fb7d86e968c4646ada3dea1cf4f76702" exitCode=0
Jan 27 16:05:06 crc kubenswrapper[4966]: I0127 16:05:06.428653 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-68j4q" event={"ID":"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80","Type":"ContainerDied","Data":"d6067a5aa2410ee63aaa2194fc062054fb7d86e968c4646ada3dea1cf4f76702"}
Jan 27 16:05:06 crc kubenswrapper[4966]: I0127 16:05:06.536007 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61b000a1-e2b2-49b1-956a-35596bfeb52d" path="/var/lib/kubelet/pods/61b000a1-e2b2-49b1-956a-35596bfeb52d/volumes"
Jan 27 16:05:08 crc kubenswrapper[4966]: I0127 16:05:08.993766 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 27 16:05:09 crc kubenswrapper[4966]: I0127 16:05:09.000580 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 27 16:05:09 crc kubenswrapper[4966]: I0127 16:05:09.468811 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 27 16:05:09 crc kubenswrapper[4966]: I0127 16:05:09.591236 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf"
Jan 27 16:05:09 crc kubenswrapper[4966]: I0127 16:05:09.699824 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-nxn4w"]
Jan 27 16:05:09 crc kubenswrapper[4966]: I0127 16:05:09.700189 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="dnsmasq-dns" containerID="cri-o://17b91c1fa92e87c3067c3108be3b36234a3bb71530608be06b81511a7be2f323" gracePeriod=10
Jan 27 16:05:10 crc kubenswrapper[4966]: I0127 16:05:10.478086 4966 generic.go:334] "Generic (PLEG): container finished" podID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerID="17b91c1fa92e87c3067c3108be3b36234a3bb71530608be06b81511a7be2f323" exitCode=0
Jan 27 16:05:10 crc kubenswrapper[4966]: I0127 16:05:10.479733 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" event={"ID":"008f2af8-50bd-4efc-ab07-5dfbf858adbf","Type":"ContainerDied","Data":"17b91c1fa92e87c3067c3108be3b36234a3bb71530608be06b81511a7be2f323"}
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.301048 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused"
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.861716 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.886484 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-68j4q"
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.986870 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-httpd-run\") pod \"400b7e5e-907b-4128-b04f-9f10acabad20\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.986949 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-scripts\") pod \"400b7e5e-907b-4128-b04f-9f10acabad20\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.986971 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-combined-ca-bundle\") pod \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987011 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-fernet-keys\") pod \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987176 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-public-tls-certs\") pod \"400b7e5e-907b-4128-b04f-9f10acabad20\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987198 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjqng\" (UniqueName: \"kubernetes.io/projected/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-kube-api-access-bjqng\") pod \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987227 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-logs\") pod \"400b7e5e-907b-4128-b04f-9f10acabad20\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987242 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-combined-ca-bundle\") pod \"400b7e5e-907b-4128-b04f-9f10acabad20\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987374 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-config-data\") pod \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987398 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-config-data\") pod \"400b7e5e-907b-4128-b04f-9f10acabad20\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987432 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp26t\" (UniqueName: \"kubernetes.io/projected/400b7e5e-907b-4128-b04f-9f10acabad20-kube-api-access-dp26t\") pod \"400b7e5e-907b-4128-b04f-9f10acabad20\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987447 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "400b7e5e-907b-4128-b04f-9f10acabad20" (UID: "400b7e5e-907b-4128-b04f-9f10acabad20"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987519 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"400b7e5e-907b-4128-b04f-9f10acabad20\" (UID: \"400b7e5e-907b-4128-b04f-9f10acabad20\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987574 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-scripts\") pod \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.987616 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-credential-keys\") pod \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\" (UID: \"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80\") "
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.988069 4966 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.989581 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-logs" (OuterVolumeSpecName: "logs") pod "400b7e5e-907b-4128-b04f-9f10acabad20" (UID: "400b7e5e-907b-4128-b04f-9f10acabad20"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.993343 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-scripts" (OuterVolumeSpecName: "scripts") pod "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" (UID: "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.993778 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" (UID: "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.996056 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" (UID: "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:05:13 crc kubenswrapper[4966]: I0127 16:05:13.999373 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-kube-api-access-bjqng" (OuterVolumeSpecName: "kube-api-access-bjqng") pod "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" (UID: "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80"). InnerVolumeSpecName "kube-api-access-bjqng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.000048 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-scripts" (OuterVolumeSpecName: "scripts") pod "400b7e5e-907b-4128-b04f-9f10acabad20" (UID: "400b7e5e-907b-4128-b04f-9f10acabad20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.001998 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400b7e5e-907b-4128-b04f-9f10acabad20-kube-api-access-dp26t" (OuterVolumeSpecName: "kube-api-access-dp26t") pod "400b7e5e-907b-4128-b04f-9f10acabad20" (UID: "400b7e5e-907b-4128-b04f-9f10acabad20"). InnerVolumeSpecName "kube-api-access-dp26t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.016199 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0" (OuterVolumeSpecName: "glance") pod "400b7e5e-907b-4128-b04f-9f10acabad20" (UID: "400b7e5e-907b-4128-b04f-9f10acabad20"). InnerVolumeSpecName "pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.031821 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "400b7e5e-907b-4128-b04f-9f10acabad20" (UID: "400b7e5e-907b-4128-b04f-9f10acabad20"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.045672 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" (UID: "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.061000 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-config-data" (OuterVolumeSpecName: "config-data") pod "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" (UID: "b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.074515 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "400b7e5e-907b-4128-b04f-9f10acabad20" (UID: "400b7e5e-907b-4128-b04f-9f10acabad20"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.076493 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-config-data" (OuterVolumeSpecName: "config-data") pod "400b7e5e-907b-4128-b04f-9f10acabad20" (UID: "400b7e5e-907b-4128-b04f-9f10acabad20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.090678 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.090718 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.090732 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp26t\" (UniqueName: \"kubernetes.io/projected/400b7e5e-907b-4128-b04f-9f10acabad20-kube-api-access-dp26t\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.090791 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") on node \"crc\" " Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.090805 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.094107 4966 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.094156 4966 reconciler_common.go:293] "Volume detached for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.094168 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.094180 4966 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.094192 4966 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.094202 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjqng\" (UniqueName: \"kubernetes.io/projected/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80-kube-api-access-bjqng\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.094214 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/400b7e5e-907b-4128-b04f-9f10acabad20-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.094225 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400b7e5e-907b-4128-b04f-9f10acabad20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.117466 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.117635 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0") on node "crc" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.196008 4966 reconciler_common.go:293] "Volume detached for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.533032 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-68j4q" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.533037 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.626154 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.626195 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.626221 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-68j4q" event={"ID":"b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80","Type":"ContainerDied","Data":"addf3d3404815e5641e25d674b36462fb61ed7a559b76ae83e28864350aab0f8"} Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.626261 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="addf3d3404815e5641e25d674b36462fb61ed7a559b76ae83e28864350aab0f8" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.626278 4966 scope.go:117] "RemoveContainer" containerID="6680c2c05ba70570db81d901f50129e13cb5e866f3de7035398294358f10384d" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.675253 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:05:14 crc kubenswrapper[4966]: E0127 16:05:14.676094 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" containerName="keystone-bootstrap" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.676117 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" containerName="keystone-bootstrap" Jan 27 16:05:14 crc kubenswrapper[4966]: E0127 16:05:14.676142 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400b7e5e-907b-4128-b04f-9f10acabad20" containerName="glance-httpd" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.676148 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="400b7e5e-907b-4128-b04f-9f10acabad20" containerName="glance-httpd" Jan 27 16:05:14 crc kubenswrapper[4966]: E0127 16:05:14.676178 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400b7e5e-907b-4128-b04f-9f10acabad20" containerName="glance-log" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.676186 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="400b7e5e-907b-4128-b04f-9f10acabad20" containerName="glance-log" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.676600 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" containerName="keystone-bootstrap" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.676637 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="400b7e5e-907b-4128-b04f-9f10acabad20" containerName="glance-log" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.676662 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="400b7e5e-907b-4128-b04f-9f10acabad20" containerName="glance-httpd" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.678645 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.691400 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.692453 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.701497 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.827862 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.828176 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-logs\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.828313 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-config-data\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.828558 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.828632 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.828683 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.828720 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-scripts\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.828877 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n65zc\" (UniqueName: \"kubernetes.io/projected/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-kube-api-access-n65zc\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.930553 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-config-data\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.930658 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.930679 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.930708 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.930729 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-scripts\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.930752 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n65zc\" (UniqueName: \"kubernetes.io/projected/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-kube-api-access-n65zc\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.930813 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.930874 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-logs\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.931323 4966 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-logs\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.932580 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.935023 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-config-data\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.935571 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-scripts\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.936119 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.936638 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.936828 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.937959 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/976b8fe222d013f8479d40e9925328ad0f72c98e7e279071e3e1ab72dc6f328d/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 16:05:14 crc kubenswrapper[4966]: I0127 16:05:14.950118 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n65zc\" (UniqueName: \"kubernetes.io/projected/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-kube-api-access-n65zc\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.009842 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " pod="openstack/glance-default-external-api-0" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.014301 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.019928 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-68j4q"] Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.032215 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-68j4q"] Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.116850 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-78x59"] Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.118318 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.127818 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-78x59"] Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.149574 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-97zfb" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.149753 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.149913 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.149979 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.151999 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.252695 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-fernet-keys\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.252819 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-combined-ca-bundle\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.252847 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-credential-keys\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.253012 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-scripts\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.253311 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-config-data\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.253414 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxzfk\" (UniqueName: \"kubernetes.io/projected/621d47f8-d9c4-4875-aad5-8dd30f215f16-kube-api-access-dxzfk\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.356286 4966 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-combined-ca-bundle\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.356387 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-credential-keys\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.356466 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-scripts\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.356641 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-config-data\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.356739 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxzfk\" (UniqueName: \"kubernetes.io/projected/621d47f8-d9c4-4875-aad5-8dd30f215f16-kube-api-access-dxzfk\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.356831 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-fernet-keys\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.360540 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-combined-ca-bundle\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.361651 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-fernet-keys\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.362114 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-credential-keys\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.366238 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-scripts\") pod \"keystone-bootstrap-78x59\" (UID: 
\"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.373351 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-config-data\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.377345 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxzfk\" (UniqueName: \"kubernetes.io/projected/621d47f8-d9c4-4875-aad5-8dd30f215f16-kube-api-access-dxzfk\") pod \"keystone-bootstrap-78x59\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:15 crc kubenswrapper[4966]: I0127 16:05:15.468872 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:16 crc kubenswrapper[4966]: I0127 16:05:16.536739 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400b7e5e-907b-4128-b04f-9f10acabad20" path="/var/lib/kubelet/pods/400b7e5e-907b-4128-b04f-9f10acabad20/volumes" Jan 27 16:05:16 crc kubenswrapper[4966]: I0127 16:05:16.538842 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80" path="/var/lib/kubelet/pods/b2ab3fa5-d02e-4ffe-bc52-b8b934ac1c80/volumes" Jan 27 16:05:18 crc kubenswrapper[4966]: I0127 16:05:18.300725 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused" Jan 27 16:05:18 crc kubenswrapper[4966]: E0127 16:05:18.988669 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 27 16:05:18 crc kubenswrapper[4966]: E0127 16:05:18.990515 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb7k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-h2lbn_openstack(8d064aa6-d9c2-4adf-a25b-33a70d86e728): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:05:18 crc kubenswrapper[4966]: E0127 16:05:18.992014 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-h2lbn" podUID="8d064aa6-d9c2-4adf-a25b-33a70d86e728" Jan 27 16:05:19 crc kubenswrapper[4966]: E0127 16:05:19.581665 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-h2lbn" podUID="8d064aa6-d9c2-4adf-a25b-33a70d86e728" Jan 27 16:05:28 crc kubenswrapper[4966]: I0127 16:05:28.301918 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: i/o timeout" Jan 27 16:05:28 crc kubenswrapper[4966]: I0127 16:05:28.302720 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:05:31 crc kubenswrapper[4966]: E0127 16:05:31.124769 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 27 16:05:31 crc kubenswrapper[4966]: E0127 16:05:31.125191 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j78g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-9mgp8_openstack(05fc08f5-c60a-4248-8de2-447d0415188e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:05:31 crc kubenswrapper[4966]: E0127 16:05:31.126420 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-9mgp8" podUID="05fc08f5-c60a-4248-8de2-447d0415188e" Jan 27 16:05:31 crc kubenswrapper[4966]: E0127 16:05:31.613280 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 27 16:05:31 crc kubenswrapper[4966]: E0127 16:05:31.613444 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdq9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-c4wdb_openstack(5989d951-ed71-4800-9400-390cbe5513f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:05:31 crc kubenswrapper[4966]: E0127 16:05:31.614621 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-c4wdb" podUID="5989d951-ed71-4800-9400-390cbe5513f9" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.728450 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" event={"ID":"008f2af8-50bd-4efc-ab07-5dfbf858adbf","Type":"ContainerDied","Data":"8a976ab25cbbebab1676c8d06c090c27573032d8b740218e476ba7fdc4194eeb"} Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.728501 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a976ab25cbbebab1676c8d06c090c27573032d8b740218e476ba7fdc4194eeb" Jan 27 16:05:31 crc kubenswrapper[4966]: E0127 16:05:31.730806 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-c4wdb" podUID="5989d951-ed71-4800-9400-390cbe5513f9" Jan 27 16:05:31 crc kubenswrapper[4966]: E0127 16:05:31.730890 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-9mgp8" podUID="05fc08f5-c60a-4248-8de2-447d0415188e" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.759524 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.860105 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-swift-storage-0\") pod \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.860202 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-config\") pod \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.860229 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-svc\") pod \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.860281 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-nb\") pod \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.860450 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-sb\") pod \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.860547 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwhqt\" (UniqueName: \"kubernetes.io/projected/008f2af8-50bd-4efc-ab07-5dfbf858adbf-kube-api-access-nwhqt\") pod \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\" (UID: \"008f2af8-50bd-4efc-ab07-5dfbf858adbf\") " Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.866400 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008f2af8-50bd-4efc-ab07-5dfbf858adbf-kube-api-access-nwhqt" (OuterVolumeSpecName: "kube-api-access-nwhqt") pod "008f2af8-50bd-4efc-ab07-5dfbf858adbf" (UID: "008f2af8-50bd-4efc-ab07-5dfbf858adbf"). InnerVolumeSpecName "kube-api-access-nwhqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.919662 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "008f2af8-50bd-4efc-ab07-5dfbf858adbf" (UID: "008f2af8-50bd-4efc-ab07-5dfbf858adbf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.924744 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "008f2af8-50bd-4efc-ab07-5dfbf858adbf" (UID: "008f2af8-50bd-4efc-ab07-5dfbf858adbf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.928436 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "008f2af8-50bd-4efc-ab07-5dfbf858adbf" (UID: "008f2af8-50bd-4efc-ab07-5dfbf858adbf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.931645 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "008f2af8-50bd-4efc-ab07-5dfbf858adbf" (UID: "008f2af8-50bd-4efc-ab07-5dfbf858adbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.935300 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-config" (OuterVolumeSpecName: "config") pod "008f2af8-50bd-4efc-ab07-5dfbf858adbf" (UID: "008f2af8-50bd-4efc-ab07-5dfbf858adbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.962995 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwhqt\" (UniqueName: \"kubernetes.io/projected/008f2af8-50bd-4efc-ab07-5dfbf858adbf-kube-api-access-nwhqt\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.963025 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.963035 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.963044 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.963052 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:31 crc kubenswrapper[4966]: I0127 16:05:31.963060 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/008f2af8-50bd-4efc-ab07-5dfbf858adbf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:32 crc kubenswrapper[4966]: I0127 16:05:32.740260 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" Jan 27 16:05:32 crc kubenswrapper[4966]: I0127 16:05:32.827044 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-nxn4w"] Jan 27 16:05:32 crc kubenswrapper[4966]: I0127 16:05:32.890623 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-nxn4w"] Jan 27 16:05:33 crc kubenswrapper[4966]: E0127 16:05:33.194995 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 16:05:33 crc kubenswrapper[4966]: E0127 16:05:33.195183 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4cs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dp2rw_openstack(4d47c000-d7a4-4dca-a051-15a5d91f3ab9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:05:33 crc kubenswrapper[4966]: E0127 16:05:33.196743 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dp2rw" podUID="4d47c000-d7a4-4dca-a051-15a5d91f3ab9" Jan 27 16:05:33 crc kubenswrapper[4966]: I0127 16:05:33.197126 4966 scope.go:117] "RemoveContainer" containerID="a709ad0f8b623403a090199ee65b90f85fc1c152c554456ef115250b4a9d3c72" Jan 27 16:05:33 crc kubenswrapper[4966]: I0127 16:05:33.302811 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-nxn4w" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: i/o timeout" Jan 27 16:05:33 crc kubenswrapper[4966]: W0127 16:05:33.733753 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae145432_f4ac_4937_a71d_5c871832c20a.slice/crio-a703be70ecb949d53b21c1086e596499ec6a76f1b363ffeff1180859322b065f WatchSource:0}: Error finding container a703be70ecb949d53b21c1086e596499ec6a76f1b363ffeff1180859322b065f: Status 404 returned error can't find the container with id a703be70ecb949d53b21c1086e596499ec6a76f1b363ffeff1180859322b065f Jan 27 16:05:33 crc kubenswrapper[4966]: I0127 16:05:33.736745 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 16:05:33 crc kubenswrapper[4966]: I0127 16:05:33.755160 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h2lbn" event={"ID":"8d064aa6-d9c2-4adf-a25b-33a70d86e728","Type":"ContainerStarted","Data":"e68900a67d7b98ba28d68bda9b7727e6dba876e15292a6e8aa5cd9807e992361"} Jan 27 16:05:33 crc kubenswrapper[4966]: I0127 16:05:33.757582 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae145432-f4ac-4937-a71d-5c871832c20a","Type":"ContainerStarted","Data":"a703be70ecb949d53b21c1086e596499ec6a76f1b363ffeff1180859322b065f"} Jan 27 16:05:33 crc kubenswrapper[4966]: I0127 16:05:33.760851 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerStarted","Data":"579c6d99a3efe5221d1b43dc67146c664fee7e13fb046ad70d7ecc90fb83da6b"} Jan 27 16:05:33 crc kubenswrapper[4966]: E0127 16:05:33.762408 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-dp2rw" podUID="4d47c000-d7a4-4dca-a051-15a5d91f3ab9" Jan 27 16:05:33 crc kubenswrapper[4966]: I0127 16:05:33.790413 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-h2lbn" podStartSLOduration=2.774137944 podStartE2EDuration="35.790383827s" podCreationTimestamp="2026-01-27 16:04:58 +0000 UTC" firstStartedPulling="2026-01-27 16:05:00.424522465 +0000 UTC m=+1366.727315953" lastFinishedPulling="2026-01-27 16:05:33.440768358 +0000 UTC m=+1399.743561836" observedRunningTime="2026-01-27 16:05:33.776969787 +0000 UTC m=+1400.079763285" watchObservedRunningTime="2026-01-27 16:05:33.790383827 +0000 UTC m=+1400.093177315" Jan 27 16:05:33 crc kubenswrapper[4966]: I0127 16:05:33.831871 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-78x59"] Jan 27 16:05:33 crc kubenswrapper[4966]: I0127 16:05:33.853391 4966 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:05:34 crc kubenswrapper[4966]: I0127 16:05:34.540814 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" path="/var/lib/kubelet/pods/008f2af8-50bd-4efc-ab07-5dfbf858adbf/volumes" Jan 27 16:05:34 crc kubenswrapper[4966]: I0127 16:05:34.776479 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae145432-f4ac-4937-a71d-5c871832c20a","Type":"ContainerStarted","Data":"79955df90857be0dc204773aba9d4c7b251a4c1f8e34305d7a46483bde28a748"} Jan 27 16:05:34 crc kubenswrapper[4966]: I0127 16:05:34.779867 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78x59" event={"ID":"621d47f8-d9c4-4875-aad5-8dd30f215f16","Type":"ContainerStarted","Data":"b2932732ae140e9744ec03705f36c57581cecf5d768cbfba66569f3a87ee8a41"} Jan 27 16:05:34 crc kubenswrapper[4966]: I0127 16:05:34.779929 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78x59" event={"ID":"621d47f8-d9c4-4875-aad5-8dd30f215f16","Type":"ContainerStarted","Data":"f1d409dc12b13f6d26df83dbf84071e0816511672e1ae2f2621ec3ff59885d9c"} Jan 27 16:05:34 crc kubenswrapper[4966]: I0127 16:05:34.784043 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4417e7ad-d093-4fe3-bf2a-d7504ba5db81","Type":"ContainerStarted","Data":"2557b7dbc0d640cce7ce19958c6f56ff01730dab2b69441f098f61907d948e84"} Jan 27 16:05:34 crc kubenswrapper[4966]: I0127 16:05:34.784641 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4417e7ad-d093-4fe3-bf2a-d7504ba5db81","Type":"ContainerStarted","Data":"d69d4366a002a308cc0590eda89e2ed8f76fc9f7917ae244075ab4a73a2ee2a0"} Jan 27 16:05:34 crc kubenswrapper[4966]: I0127 16:05:34.808977 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-78x59" podStartSLOduration=19.808961465 podStartE2EDuration="19.808961465s" podCreationTimestamp="2026-01-27 16:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:34.801216622 +0000 UTC m=+1401.104010110" watchObservedRunningTime="2026-01-27 16:05:34.808961465 +0000 UTC m=+1401.111754943" Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.796102 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4417e7ad-d093-4fe3-bf2a-d7504ba5db81","Type":"ContainerStarted","Data":"96160452ef69eb575158448d9a2c75f539b571a48dce8eeef6d2a9c545dfd924"} Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.798558 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae145432-f4ac-4937-a71d-5c871832c20a","Type":"ContainerStarted","Data":"2d30a3136e0067a73d4556ad2583452bb94726a7e73445d9956209567a53e232"} Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.816647 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.816625271 podStartE2EDuration="21.816625271s" podCreationTimestamp="2026-01-27 16:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
16:05:35.813478962 +0000 UTC m=+1402.116272460" watchObservedRunningTime="2026-01-27 16:05:35.816625271 +0000 UTC m=+1402.119418769" Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.855976 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=30.855953615 podStartE2EDuration="30.855953615s" podCreationTimestamp="2026-01-27 16:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:35.839083445 +0000 UTC m=+1402.141876933" watchObservedRunningTime="2026-01-27 16:05:35.855953615 +0000 UTC m=+1402.158747103" Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.938415 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.938815 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.938971 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.939185 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.976204 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 16:05:35 crc kubenswrapper[4966]: I0127 16:05:35.983986 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 16:05:37 crc kubenswrapper[4966]: I0127 16:05:37.826182 4966 generic.go:334] "Generic (PLEG): container finished" podID="621d47f8-d9c4-4875-aad5-8dd30f215f16" containerID="b2932732ae140e9744ec03705f36c57581cecf5d768cbfba66569f3a87ee8a41" exitCode=0 Jan 27 16:05:37 crc kubenswrapper[4966]: I0127 16:05:37.826730 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78x59" event={"ID":"621d47f8-d9c4-4875-aad5-8dd30f215f16","Type":"ContainerDied","Data":"b2932732ae140e9744ec03705f36c57581cecf5d768cbfba66569f3a87ee8a41"} Jan 27 16:05:37 crc kubenswrapper[4966]: I0127 16:05:37.831337 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerStarted","Data":"5685c7a8b83e567e446891d0a4cdff3f354f53b26adc1b73d1d466e112c1a1f5"} Jan 27 16:05:38 crc kubenswrapper[4966]: I0127 16:05:38.842375 4966 generic.go:334] "Generic (PLEG): container finished" podID="8d064aa6-d9c2-4adf-a25b-33a70d86e728" containerID="e68900a67d7b98ba28d68bda9b7727e6dba876e15292a6e8aa5cd9807e992361" exitCode=0 Jan 27 16:05:38 crc kubenswrapper[4966]: I0127 16:05:38.842553 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h2lbn" event={"ID":"8d064aa6-d9c2-4adf-a25b-33a70d86e728","Type":"ContainerDied","Data":"e68900a67d7b98ba28d68bda9b7727e6dba876e15292a6e8aa5cd9807e992361"} Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.318200 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-78x59" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.327623 4966 util.go:48] "No ready sandbox for pod can be found. 
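[editor's note] The "SyncLoop (probe)" records above show glance-default-internal-api-0's startup probe reporting unhealthy twice before flipping to started, while its readiness probe reports an empty status (not yet ready). A sketch that folds such lines into the latest status per pod and probe; the regex and the "unready" placeholder are illustrative:

    #!/usr/bin/env python3
    """Fold kubelet "SyncLoop (probe)" lines into latest per-pod state (sketch)."""
    import re

    PROBE = re.compile(
        r'"SyncLoop \(probe\)" probe="(?P<probe>\w+)" status="(?P<status>\w*)" '
        r'pod="(?P<pod>[^"]+)"'
    )

    def latest_probe_state(lines):
        state = {}
        for line in lines:
            m = PROBE.search(line)
            if m:
                # an empty readiness status in these lines means "not ready yet"
                state[(m.group("pod"), m.group("probe"))] = m.group("status") or "unready"
        return state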
Need to start a new one" pod="openstack/placement-db-sync-h2lbn" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.414853 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-scripts\") pod \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.414996 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d064aa6-d9c2-4adf-a25b-33a70d86e728-logs\") pod \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415067 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-fernet-keys\") pod \"621d47f8-d9c4-4875-aad5-8dd30f215f16\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415099 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-scripts\") pod \"621d47f8-d9c4-4875-aad5-8dd30f215f16\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415163 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-config-data\") pod \"621d47f8-d9c4-4875-aad5-8dd30f215f16\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415240 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-credential-keys\") pod \"621d47f8-d9c4-4875-aad5-8dd30f215f16\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415275 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-combined-ca-bundle\") pod \"621d47f8-d9c4-4875-aad5-8dd30f215f16\" (UID: \"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415303 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb7k8\" (UniqueName: \"kubernetes.io/projected/8d064aa6-d9c2-4adf-a25b-33a70d86e728-kube-api-access-sb7k8\") pod \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415336 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-combined-ca-bundle\") pod \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415377 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxzfk\" (UniqueName: \"kubernetes.io/projected/621d47f8-d9c4-4875-aad5-8dd30f215f16-kube-api-access-dxzfk\") pod \"621d47f8-d9c4-4875-aad5-8dd30f215f16\" (UID: 
\"621d47f8-d9c4-4875-aad5-8dd30f215f16\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415393 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d064aa6-d9c2-4adf-a25b-33a70d86e728-logs" (OuterVolumeSpecName: "logs") pod "8d064aa6-d9c2-4adf-a25b-33a70d86e728" (UID: "8d064aa6-d9c2-4adf-a25b-33a70d86e728"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.415411 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-config-data\") pod \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\" (UID: \"8d064aa6-d9c2-4adf-a25b-33a70d86e728\") " Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.416989 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d064aa6-d9c2-4adf-a25b-33a70d86e728-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.422472 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "621d47f8-d9c4-4875-aad5-8dd30f215f16" (UID: "621d47f8-d9c4-4875-aad5-8dd30f215f16"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.424058 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621d47f8-d9c4-4875-aad5-8dd30f215f16-kube-api-access-dxzfk" (OuterVolumeSpecName: "kube-api-access-dxzfk") pod "621d47f8-d9c4-4875-aad5-8dd30f215f16" (UID: "621d47f8-d9c4-4875-aad5-8dd30f215f16"). InnerVolumeSpecName "kube-api-access-dxzfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.424270 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-scripts" (OuterVolumeSpecName: "scripts") pod "621d47f8-d9c4-4875-aad5-8dd30f215f16" (UID: "621d47f8-d9c4-4875-aad5-8dd30f215f16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.425254 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d064aa6-d9c2-4adf-a25b-33a70d86e728-kube-api-access-sb7k8" (OuterVolumeSpecName: "kube-api-access-sb7k8") pod "8d064aa6-d9c2-4adf-a25b-33a70d86e728" (UID: "8d064aa6-d9c2-4adf-a25b-33a70d86e728"). InnerVolumeSpecName "kube-api-access-sb7k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.426067 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-scripts" (OuterVolumeSpecName: "scripts") pod "8d064aa6-d9c2-4adf-a25b-33a70d86e728" (UID: "8d064aa6-d9c2-4adf-a25b-33a70d86e728"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.429490 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "621d47f8-d9c4-4875-aad5-8dd30f215f16" (UID: "621d47f8-d9c4-4875-aad5-8dd30f215f16"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.456505 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "621d47f8-d9c4-4875-aad5-8dd30f215f16" (UID: "621d47f8-d9c4-4875-aad5-8dd30f215f16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.456584 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d064aa6-d9c2-4adf-a25b-33a70d86e728" (UID: "8d064aa6-d9c2-4adf-a25b-33a70d86e728"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.461446 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-config-data" (OuterVolumeSpecName: "config-data") pod "8d064aa6-d9c2-4adf-a25b-33a70d86e728" (UID: "8d064aa6-d9c2-4adf-a25b-33a70d86e728"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.469074 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-config-data" (OuterVolumeSpecName: "config-data") pod "621d47f8-d9c4-4875-aad5-8dd30f215f16" (UID: "621d47f8-d9c4-4875-aad5-8dd30f215f16"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.519814 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.519871 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxzfk\" (UniqueName: \"kubernetes.io/projected/621d47f8-d9c4-4875-aad5-8dd30f215f16-kube-api-access-dxzfk\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.519918 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.519935 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d064aa6-d9c2-4adf-a25b-33a70d86e728-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.519954 4966 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.519973 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.519988 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.520015 4966 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.520031 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621d47f8-d9c4-4875-aad5-8dd30f215f16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.520052 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb7k8\" (UniqueName: \"kubernetes.io/projected/8d064aa6-d9c2-4adf-a25b-33a70d86e728-kube-api-access-sb7k8\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.912110 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerStarted","Data":"042a0171df1179e3f83e0ecc77f468cc8390a8e6fc510489d15712ddea2d801b"} Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.916101 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78x59" event={"ID":"621d47f8-d9c4-4875-aad5-8dd30f215f16","Type":"ContainerDied","Data":"f1d409dc12b13f6d26df83dbf84071e0816511672e1ae2f2621ec3ff59885d9c"} Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.916133 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1d409dc12b13f6d26df83dbf84071e0816511672e1ae2f2621ec3ff59885d9c" Jan 27 16:05:41 crc 
Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.916140 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-78x59"
Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.919149 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h2lbn" event={"ID":"8d064aa6-d9c2-4adf-a25b-33a70d86e728","Type":"ContainerDied","Data":"a10b73dcbd1cf36bbdec657360b0127447c0485692411ef4bcbdfd2062f289a3"}
Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.919195 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10b73dcbd1cf36bbdec657360b0127447c0485692411ef4bcbdfd2062f289a3"
Jan 27 16:05:41 crc kubenswrapper[4966]: I0127 16:05:41.919206 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-h2lbn"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.473178 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7d7bd8f5c6-k4z4r"]
Jan 27 16:05:42 crc kubenswrapper[4966]: E0127 16:05:42.473572 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="init"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.473584 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="init"
Jan 27 16:05:42 crc kubenswrapper[4966]: E0127 16:05:42.473603 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621d47f8-d9c4-4875-aad5-8dd30f215f16" containerName="keystone-bootstrap"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.473609 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="621d47f8-d9c4-4875-aad5-8dd30f215f16" containerName="keystone-bootstrap"
Jan 27 16:05:42 crc kubenswrapper[4966]: E0127 16:05:42.473620 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d064aa6-d9c2-4adf-a25b-33a70d86e728" containerName="placement-db-sync"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.473626 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d064aa6-d9c2-4adf-a25b-33a70d86e728" containerName="placement-db-sync"
Jan 27 16:05:42 crc kubenswrapper[4966]: E0127 16:05:42.473650 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="dnsmasq-dns"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.473655 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="dnsmasq-dns"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.473829 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="621d47f8-d9c4-4875-aad5-8dd30f215f16" containerName="keystone-bootstrap"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.473849 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d064aa6-d9c2-4adf-a25b-33a70d86e728" containerName="placement-db-sync"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.473862 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="008f2af8-50bd-4efc-ab07-5dfbf858adbf" containerName="dnsmasq-dns"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.474589 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.476920 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.477850 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.478333 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-97zfb"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.478677 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.478773 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.480212 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.496560 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d7bd8f5c6-k4z4r"]
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.545405 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-combined-ca-bundle\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.545465 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-internal-tls-certs\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.545493 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-config-data\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.545567 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-public-tls-certs\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.545653 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-scripts\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.545719 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-fernet-keys\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.545745 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6v6\" (UniqueName: \"kubernetes.io/projected/1ac92552-e80a-486f-a9cf-4e57907928ca-kube-api-access-jc6v6\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.545809 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-credential-keys\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.568571 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f68b888d8-25wpv"]
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.570788 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.573913 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.574225 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.574638 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hdq2c"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.575143 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.575227 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.598857 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f68b888d8-25wpv"]
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.647332 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-combined-ca-bundle\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.647381 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-combined-ca-bundle\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.647416 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-internal-tls-certs\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.647439 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-config-data\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.647463 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-scripts\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.647490 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-config-data\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.647582 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-public-tls-certs\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.647804 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-public-tls-certs\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.647935 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-scripts\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.648054 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-fernet-keys\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.648083 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc6v6\" (UniqueName: \"kubernetes.io/projected/1ac92552-e80a-486f-a9cf-4e57907928ca-kube-api-access-jc6v6\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.648185 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-internal-tls-certs\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.648231 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzvbt\" (UniqueName: \"kubernetes.io/projected/5a03245f-c0e4-4241-8711-18cd9517be4d-kube-api-access-rzvbt\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.648277 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-credential-keys\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.648346 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a03245f-c0e4-4241-8711-18cd9517be4d-logs\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.652348 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-public-tls-certs\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.652368 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-combined-ca-bundle\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.652423 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-scripts\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.653377 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-internal-tls-certs\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.654596 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-fernet-keys\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.659322 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-credential-keys\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.660608 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac92552-e80a-486f-a9cf-4e57907928ca-config-data\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.662166 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6v6\" (UniqueName: \"kubernetes.io/projected/1ac92552-e80a-486f-a9cf-4e57907928ca-kube-api-access-jc6v6\") pod \"keystone-7d7bd8f5c6-k4z4r\" (UID: \"1ac92552-e80a-486f-a9cf-4e57907928ca\") " pod="openstack/keystone-7d7bd8f5c6-k4z4r"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.750425 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-internal-tls-certs\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.750496 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzvbt\" (UniqueName: \"kubernetes.io/projected/5a03245f-c0e4-4241-8711-18cd9517be4d-kube-api-access-rzvbt\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.750544 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a03245f-c0e4-4241-8711-18cd9517be4d-logs\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.750603 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-combined-ca-bundle\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.750637 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-scripts\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.750661 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-config-data\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.750676 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-public-tls-certs\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.751146 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a03245f-c0e4-4241-8711-18cd9517be4d-logs\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.754116 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-combined-ca-bundle\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.755280 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-public-tls-certs\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.755398 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-scripts\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.755432 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-internal-tls-certs\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.755600 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a03245f-c0e4-4241-8711-18cd9517be4d-config-data\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.767464 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzvbt\" (UniqueName: \"kubernetes.io/projected/5a03245f-c0e4-4241-8711-18cd9517be4d-kube-api-access-rzvbt\") pod \"placement-f68b888d8-25wpv\" (UID: \"5a03245f-c0e4-4241-8711-18cd9517be4d\") " pod="openstack/placement-f68b888d8-25wpv"
Jan 27 16:05:42 crc kubenswrapper[4966]: I0127 16:05:42.791937 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d7bd8f5c6-k4z4r"
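[editor's note] The MountVolume.SetUp records above cover every declared volume: eight for keystone-7d7bd8f5c6-k4z4r (combined-ca-bundle, internal-tls-certs, config-data, public-tls-certs, scripts, fernet-keys, credential-keys, kube-api-access-jc6v6) and seven for placement-f68b888d8-25wpv. A sketch that tallies successful mounts per pod (regex illustrative):

    #!/usr/bin/env python3
    """Tally MountVolume.SetUp successes per pod (sketch)."""
    import re
    from collections import Counter

    SETUP = re.compile(
        r'"MountVolume\.SetUp succeeded for volume \\"(?P<vol>[^"\\]+)\\".*'
        r'pod="(?P<pod>[^"]+)"'
    )

    def mounted_counts(lines):
        counts = Counter()
        for line in lines:
            m = SETUP.search(line)
            if m:
                counts[m.group("pod")] += 1
        return counts  # e.g. {'openstack/keystone-7d7bd8f5c6-k4z4r': 8, ...}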
Need to start a new one" pod="openstack/placement-f68b888d8-25wpv" Jan 27 16:05:43 crc kubenswrapper[4966]: E0127 16:05:43.104905 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71afeadb_1cd9_461f_b899_307f7dd34fca.slice/crio-conmon-41b7b1e528a76290c9029e5e699549001d5943d2e75e30d4136830e7c1f442e0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71afeadb_1cd9_461f_b899_307f7dd34fca.slice/crio-41b7b1e528a76290c9029e5e699549001d5943d2e75e30d4136830e7c1f442e0.scope\": RecentStats: unable to find data in memory cache]" Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.302356 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d7bd8f5c6-k4z4r"] Jan 27 16:05:43 crc kubenswrapper[4966]: W0127 16:05:43.311794 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ac92552_e80a_486f_a9cf_4e57907928ca.slice/crio-55b74abc46df8a0b1e576bca90161563dc0515e5ae22072f8f87f88a542c7a79 WatchSource:0}: Error finding container 55b74abc46df8a0b1e576bca90161563dc0515e5ae22072f8f87f88a542c7a79: Status 404 returned error can't find the container with id 55b74abc46df8a0b1e576bca90161563dc0515e5ae22072f8f87f88a542c7a79 Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.454955 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f68b888d8-25wpv"] Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.959595 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d7bd8f5c6-k4z4r" event={"ID":"1ac92552-e80a-486f-a9cf-4e57907928ca","Type":"ContainerStarted","Data":"ca5b7bfc651de1f67ad06020bf421ec31b656751816301f7977d1f8141062f8e"} Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.959957 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d7bd8f5c6-k4z4r" event={"ID":"1ac92552-e80a-486f-a9cf-4e57907928ca","Type":"ContainerStarted","Data":"55b74abc46df8a0b1e576bca90161563dc0515e5ae22072f8f87f88a542c7a79"} Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.960391 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7d7bd8f5c6-k4z4r" Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.963429 4966 generic.go:334] "Generic (PLEG): container finished" podID="71afeadb-1cd9-461f-b899-307f7dd34fca" containerID="41b7b1e528a76290c9029e5e699549001d5943d2e75e30d4136830e7c1f442e0" exitCode=0 Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.963484 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7bd6d" event={"ID":"71afeadb-1cd9-461f-b899-307f7dd34fca","Type":"ContainerDied","Data":"41b7b1e528a76290c9029e5e699549001d5943d2e75e30d4136830e7c1f442e0"} Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.966011 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f68b888d8-25wpv" event={"ID":"5a03245f-c0e4-4241-8711-18cd9517be4d","Type":"ContainerStarted","Data":"83ef103a04d8a7dce84a2b80ad0a939e58c816e84ccc295bfb23ee1c763d6747"} Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.966045 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f68b888d8-25wpv" 
event={"ID":"5a03245f-c0e4-4241-8711-18cd9517be4d","Type":"ContainerStarted","Data":"d403696644595bbbb3093b0ca4b5af6caa9b56ecd695dad11844e55396b10bc8"} Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.966060 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f68b888d8-25wpv" event={"ID":"5a03245f-c0e4-4241-8711-18cd9517be4d","Type":"ContainerStarted","Data":"5aa9440e92c0a489edb38f93eb8d006db9593bd591159a54577c0d59ae3a4898"} Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.967098 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f68b888d8-25wpv" Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.967140 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f68b888d8-25wpv" Jan 27 16:05:43 crc kubenswrapper[4966]: I0127 16:05:43.986422 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7d7bd8f5c6-k4z4r" podStartSLOduration=1.98640112 podStartE2EDuration="1.98640112s" podCreationTimestamp="2026-01-27 16:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:43.975967612 +0000 UTC m=+1410.278761120" watchObservedRunningTime="2026-01-27 16:05:43.98640112 +0000 UTC m=+1410.289194618" Jan 27 16:05:44 crc kubenswrapper[4966]: I0127 16:05:44.027551 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f68b888d8-25wpv" podStartSLOduration=2.02753442 podStartE2EDuration="2.02753442s" podCreationTimestamp="2026-01-27 16:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:44.020655305 +0000 UTC m=+1410.323448813" watchObservedRunningTime="2026-01-27 16:05:44.02753442 +0000 UTC m=+1410.330327908" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.015146 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.016610 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.016684 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.016732 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.074252 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.078158 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.380383 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.411743 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-combined-ca-bundle\") pod \"71afeadb-1cd9-461f-b899-307f7dd34fca\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.411791 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-config\") pod \"71afeadb-1cd9-461f-b899-307f7dd34fca\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.412029 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q2pk\" (UniqueName: \"kubernetes.io/projected/71afeadb-1cd9-461f-b899-307f7dd34fca-kube-api-access-7q2pk\") pod \"71afeadb-1cd9-461f-b899-307f7dd34fca\" (UID: \"71afeadb-1cd9-461f-b899-307f7dd34fca\") " Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.431954 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71afeadb-1cd9-461f-b899-307f7dd34fca-kube-api-access-7q2pk" (OuterVolumeSpecName: "kube-api-access-7q2pk") pod "71afeadb-1cd9-461f-b899-307f7dd34fca" (UID: "71afeadb-1cd9-461f-b899-307f7dd34fca"). InnerVolumeSpecName "kube-api-access-7q2pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.447026 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-config" (OuterVolumeSpecName: "config") pod "71afeadb-1cd9-461f-b899-307f7dd34fca" (UID: "71afeadb-1cd9-461f-b899-307f7dd34fca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.448821 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71afeadb-1cd9-461f-b899-307f7dd34fca" (UID: "71afeadb-1cd9-461f-b899-307f7dd34fca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.513890 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.513941 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/71afeadb-1cd9-461f-b899-307f7dd34fca-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:45 crc kubenswrapper[4966]: I0127 16:05:45.513956 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q2pk\" (UniqueName: \"kubernetes.io/projected/71afeadb-1cd9-461f-b899-307f7dd34fca-kube-api-access-7q2pk\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.026284 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7bd6d" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.027751 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7bd6d" event={"ID":"71afeadb-1cd9-461f-b899-307f7dd34fca","Type":"ContainerDied","Data":"2c4561a47e4a969061a30e76a4422428c0a6c49c5a79bd0b8c93c6b46deebaad"} Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.027789 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c4561a47e4a969061a30e76a4422428c0a6c49c5a79bd0b8c93c6b46deebaad" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.266629 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-mgmwk"] Jan 27 16:05:46 crc kubenswrapper[4966]: E0127 16:05:46.267750 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71afeadb-1cd9-461f-b899-307f7dd34fca" containerName="neutron-db-sync" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.267765 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="71afeadb-1cd9-461f-b899-307f7dd34fca" containerName="neutron-db-sync" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.268005 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="71afeadb-1cd9-461f-b899-307f7dd34fca" containerName="neutron-db-sync" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.269120 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.286340 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-mgmwk"] Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.340720 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwf2\" (UniqueName: \"kubernetes.io/projected/4fab0805-4dcd-46f0-9fdf-3234dccac22e-kube-api-access-nmwf2\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.340789 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.340906 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.340934 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-svc\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.340968 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-config\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.340999 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.352152 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f886fdd58-d9dcc"] Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.356383 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.358183 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.362541 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.362681 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-v7pvv" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.363176 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.430200 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f886fdd58-d9dcc"] Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442552 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-svc\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442604 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-config\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442642 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442709 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tkr\" (UniqueName: \"kubernetes.io/projected/4b821f12-5d2b-476d-9a06-82c533b408cc-kube-api-access-g5tkr\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442745 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-config\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442765 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmwf2\" (UniqueName: \"kubernetes.io/projected/4fab0805-4dcd-46f0-9fdf-3234dccac22e-kube-api-access-nmwf2\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442792 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-ovndb-tls-certs\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442811 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442860 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-combined-ca-bundle\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442910 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-httpd-config\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.442941 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.443752 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.444259 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-svc\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.444785 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-config\") 
pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.445597 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.445614 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.470774 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmwf2\" (UniqueName: \"kubernetes.io/projected/4fab0805-4dcd-46f0-9fdf-3234dccac22e-kube-api-access-nmwf2\") pod \"dnsmasq-dns-6b7b667979-mgmwk\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.546281 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tkr\" (UniqueName: \"kubernetes.io/projected/4b821f12-5d2b-476d-9a06-82c533b408cc-kube-api-access-g5tkr\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.546340 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-config\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.546389 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-ovndb-tls-certs\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.546453 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-combined-ca-bundle\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.546516 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-httpd-config\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.552969 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-combined-ca-bundle\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " 
pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.553805 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-config\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.555713 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-ovndb-tls-certs\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.556297 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-httpd-config\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.569949 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tkr\" (UniqueName: \"kubernetes.io/projected/4b821f12-5d2b-476d-9a06-82c533b408cc-kube-api-access-g5tkr\") pod \"neutron-6f886fdd58-d9dcc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.619602 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:46 crc kubenswrapper[4966]: I0127 16:05:46.674847 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.046965 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f886fdd58-d9dcc"] Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.099994 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-mgmwk"] Jan 27 16:05:48 crc kubenswrapper[4966]: W0127 16:05:48.105316 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fab0805_4dcd_46f0_9fdf_3234dccac22e.slice/crio-c7a997ae413a0fee51ad11ecc7a0e93550b529426c2090622a5242abfa8c9f5f WatchSource:0}: Error finding container c7a997ae413a0fee51ad11ecc7a0e93550b529426c2090622a5242abfa8c9f5f: Status 404 returned error can't find the container with id c7a997ae413a0fee51ad11ecc7a0e93550b529426c2090622a5242abfa8c9f5f Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.176907 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c7444dc4c-gxtck"] Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.178976 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.183931 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.184237 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.204721 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-internal-tls-certs\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.204844 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-ovndb-tls-certs\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.204949 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-public-tls-certs\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.205030 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-httpd-config\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.205151 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-combined-ca-bundle\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.205333 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8mc\" (UniqueName: \"kubernetes.io/projected/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-kube-api-access-lr8mc\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.205395 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-config\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.259968 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c7444dc4c-gxtck"] Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.307255 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-ovndb-tls-certs\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.307655 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-public-tls-certs\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.307721 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-httpd-config\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.307816 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-combined-ca-bundle\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.307997 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8mc\" (UniqueName: \"kubernetes.io/projected/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-kube-api-access-lr8mc\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.308072 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-config\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.308210 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-internal-tls-certs\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.311826 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-ovndb-tls-certs\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.313542 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-public-tls-certs\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.313815 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-internal-tls-certs\") pod \"neutron-5c7444dc4c-gxtck\" (UID: 
\"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.313952 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-httpd-config\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.315440 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-config\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.317381 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-combined-ca-bundle\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.335269 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8mc\" (UniqueName: \"kubernetes.io/projected/3fce5b18-2272-4aba-a5cc-75f98ee0b1f7-kube-api-access-lr8mc\") pod \"neutron-5c7444dc4c-gxtck\" (UID: \"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7\") " pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:48 crc kubenswrapper[4966]: I0127 16:05:48.623806 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:49 crc kubenswrapper[4966]: I0127 16:05:49.060101 4966 generic.go:334] "Generic (PLEG): container finished" podID="4fab0805-4dcd-46f0-9fdf-3234dccac22e" containerID="92b2d2a44dcd134d2ac673166025c78e343bc16744eab1ad7538cb7992297b5c" exitCode=0 Jan 27 16:05:49 crc kubenswrapper[4966]: I0127 16:05:49.060146 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" event={"ID":"4fab0805-4dcd-46f0-9fdf-3234dccac22e","Type":"ContainerDied","Data":"92b2d2a44dcd134d2ac673166025c78e343bc16744eab1ad7538cb7992297b5c"} Jan 27 16:05:49 crc kubenswrapper[4966]: I0127 16:05:49.060187 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" event={"ID":"4fab0805-4dcd-46f0-9fdf-3234dccac22e","Type":"ContainerStarted","Data":"c7a997ae413a0fee51ad11ecc7a0e93550b529426c2090622a5242abfa8c9f5f"} Jan 27 16:05:49 crc kubenswrapper[4966]: I0127 16:05:49.061657 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f886fdd58-d9dcc" event={"ID":"4b821f12-5d2b-476d-9a06-82c533b408cc","Type":"ContainerStarted","Data":"a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112"} Jan 27 16:05:49 crc kubenswrapper[4966]: I0127 16:05:49.061704 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f886fdd58-d9dcc" event={"ID":"4b821f12-5d2b-476d-9a06-82c533b408cc","Type":"ContainerStarted","Data":"f2ffe967c0ba215d63a0dadc854d1694a81c2c15b0edb30fdb5693c6eb0fb1df"} Jan 27 16:05:51 crc kubenswrapper[4966]: I0127 16:05:51.536461 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 16:05:51 crc kubenswrapper[4966]: I0127 16:05:51.537383 4966 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jan 27 16:05:51 crc kubenswrapper[4966]: I0127 16:05:51.568466 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 16:05:53 crc kubenswrapper[4966]: I0127 16:05:53.889328 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c7444dc4c-gxtck"] Jan 27 16:05:53 crc kubenswrapper[4966]: W0127 16:05:53.893626 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fce5b18_2272_4aba_a5cc_75f98ee0b1f7.slice/crio-ce9c186b7c03272b673cd36befa70065919af2264112e7d9a0c8bfe5eb54a3e5 WatchSource:0}: Error finding container ce9c186b7c03272b673cd36befa70065919af2264112e7d9a0c8bfe5eb54a3e5: Status 404 returned error can't find the container with id ce9c186b7c03272b673cd36befa70065919af2264112e7d9a0c8bfe5eb54a3e5 Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.179398 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9mgp8" event={"ID":"05fc08f5-c60a-4248-8de2-447d0415188e","Type":"ContainerStarted","Data":"4aede79151d22fda9bfdcbd820539923d790f45a0123f8f7ce030fa9ce38640f"} Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.182197 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f886fdd58-d9dcc" event={"ID":"4b821f12-5d2b-476d-9a06-82c533b408cc","Type":"ContainerStarted","Data":"231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff"} Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.182365 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.185132 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerStarted","Data":"37ec43e7f1708e0f246642b4f42869cc8a0b5d2e5d0d8ecadb1cdf0ba4681644"} Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.185249 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="ceilometer-central-agent" containerID="cri-o://579c6d99a3efe5221d1b43dc67146c664fee7e13fb046ad70d7ecc90fb83da6b" gracePeriod=30 Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.185459 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.185517 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="proxy-httpd" containerID="cri-o://37ec43e7f1708e0f246642b4f42869cc8a0b5d2e5d0d8ecadb1cdf0ba4681644" gracePeriod=30 Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.185581 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="sg-core" containerID="cri-o://042a0171df1179e3f83e0ecc77f468cc8390a8e6fc510489d15712ddea2d801b" gracePeriod=30 Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.185625 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="ceilometer-notification-agent" containerID="cri-o://5685c7a8b83e567e446891d0a4cdff3f354f53b26adc1b73d1d466e112c1a1f5" 
gracePeriod=30 Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.190686 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c7444dc4c-gxtck" event={"ID":"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7","Type":"ContainerStarted","Data":"6b364abd294adc951c88d6b63539e2b47bb35c99b49547c72dc0679e0e098982"} Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.190721 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c7444dc4c-gxtck" event={"ID":"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7","Type":"ContainerStarted","Data":"ce9c186b7c03272b673cd36befa70065919af2264112e7d9a0c8bfe5eb54a3e5"} Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.200216 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c4wdb" event={"ID":"5989d951-ed71-4800-9400-390cbe5513f9","Type":"ContainerStarted","Data":"b1787c60db8a370a891efda62a6662c217861954221efe0b8d26c672d800384d"} Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.205619 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" event={"ID":"4fab0805-4dcd-46f0-9fdf-3234dccac22e","Type":"ContainerStarted","Data":"c18251672f061e2b774367f4290566cce6c8ee38363ff38d531a5b438f1d7f6b"} Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.206167 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.222211 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-9mgp8" podStartSLOduration=2.67746054 podStartE2EDuration="56.22219527s" podCreationTimestamp="2026-01-27 16:04:58 +0000 UTC" firstStartedPulling="2026-01-27 16:04:59.951782602 +0000 UTC m=+1366.254576090" lastFinishedPulling="2026-01-27 16:05:53.496517332 +0000 UTC m=+1419.799310820" observedRunningTime="2026-01-27 16:05:54.199908861 +0000 UTC m=+1420.502702369" watchObservedRunningTime="2026-01-27 16:05:54.22219527 +0000 UTC m=+1420.524988758" Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.225214 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f886fdd58-d9dcc" podStartSLOduration=8.225174284 podStartE2EDuration="8.225174284s" podCreationTimestamp="2026-01-27 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:54.218319088 +0000 UTC m=+1420.521112586" watchObservedRunningTime="2026-01-27 16:05:54.225174284 +0000 UTC m=+1420.527967772" Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.254714 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8898597710000002 podStartE2EDuration="55.254687689s" podCreationTimestamp="2026-01-27 16:04:59 +0000 UTC" firstStartedPulling="2026-01-27 16:05:01.124883439 +0000 UTC m=+1367.427676927" lastFinishedPulling="2026-01-27 16:05:53.489711357 +0000 UTC m=+1419.792504845" observedRunningTime="2026-01-27 16:05:54.243809648 +0000 UTC m=+1420.546603146" watchObservedRunningTime="2026-01-27 16:05:54.254687689 +0000 UTC m=+1420.557481187" Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.273148 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-c4wdb" podStartSLOduration=2.63709187 podStartE2EDuration="55.273124908s" podCreationTimestamp="2026-01-27 16:04:59 +0000 UTC" 
firstStartedPulling="2026-01-27 16:05:00.861812795 +0000 UTC m=+1367.164606283" lastFinishedPulling="2026-01-27 16:05:53.497845833 +0000 UTC m=+1419.800639321" observedRunningTime="2026-01-27 16:05:54.259219662 +0000 UTC m=+1420.562013160" watchObservedRunningTime="2026-01-27 16:05:54.273124908 +0000 UTC m=+1420.575918396" Jan 27 16:05:54 crc kubenswrapper[4966]: I0127 16:05:54.291285 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" podStartSLOduration=8.291258687 podStartE2EDuration="8.291258687s" podCreationTimestamp="2026-01-27 16:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:54.281403148 +0000 UTC m=+1420.584196656" watchObservedRunningTime="2026-01-27 16:05:54.291258687 +0000 UTC m=+1420.594052175" Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.219569 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dp2rw" event={"ID":"4d47c000-d7a4-4dca-a051-15a5d91f3ab9","Type":"ContainerStarted","Data":"f184896614b1c948c519e41aba00ef22569dfae9acedab34364b4b2e8ca228f4"} Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.223148 4966 generic.go:334] "Generic (PLEG): container finished" podID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerID="37ec43e7f1708e0f246642b4f42869cc8a0b5d2e5d0d8ecadb1cdf0ba4681644" exitCode=0 Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.223186 4966 generic.go:334] "Generic (PLEG): container finished" podID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerID="042a0171df1179e3f83e0ecc77f468cc8390a8e6fc510489d15712ddea2d801b" exitCode=2 Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.223185 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerDied","Data":"37ec43e7f1708e0f246642b4f42869cc8a0b5d2e5d0d8ecadb1cdf0ba4681644"} Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.223224 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerDied","Data":"042a0171df1179e3f83e0ecc77f468cc8390a8e6fc510489d15712ddea2d801b"} Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.223251 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerDied","Data":"579c6d99a3efe5221d1b43dc67146c664fee7e13fb046ad70d7ecc90fb83da6b"} Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.223197 4966 generic.go:334] "Generic (PLEG): container finished" podID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerID="579c6d99a3efe5221d1b43dc67146c664fee7e13fb046ad70d7ecc90fb83da6b" exitCode=0 Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.226076 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c7444dc4c-gxtck" event={"ID":"3fce5b18-2272-4aba-a5cc-75f98ee0b1f7","Type":"ContainerStarted","Data":"f776d7088536e8effb967fd35ec06f924814399353d0f858a142e569646b0bb9"} Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.250289 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dp2rw" podStartSLOduration=4.212227074 podStartE2EDuration="57.250270706s" podCreationTimestamp="2026-01-27 16:04:58 +0000 UTC" firstStartedPulling="2026-01-27 16:05:00.458956475 +0000 UTC m=+1366.761749963" 
lastFinishedPulling="2026-01-27 16:05:53.497000107 +0000 UTC m=+1419.799793595" observedRunningTime="2026-01-27 16:05:55.241740899 +0000 UTC m=+1421.544534417" watchObservedRunningTime="2026-01-27 16:05:55.250270706 +0000 UTC m=+1421.553064204" Jan 27 16:05:55 crc kubenswrapper[4966]: I0127 16:05:55.290090 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c7444dc4c-gxtck" podStartSLOduration=7.290067014 podStartE2EDuration="7.290067014s" podCreationTimestamp="2026-01-27 16:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:05:55.2768474 +0000 UTC m=+1421.579640888" watchObservedRunningTime="2026-01-27 16:05:55.290067014 +0000 UTC m=+1421.592860502" Jan 27 16:05:56 crc kubenswrapper[4966]: I0127 16:05:56.236453 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:05:57 crc kubenswrapper[4966]: I0127 16:05:57.249629 4966 generic.go:334] "Generic (PLEG): container finished" podID="5989d951-ed71-4800-9400-390cbe5513f9" containerID="b1787c60db8a370a891efda62a6662c217861954221efe0b8d26c672d800384d" exitCode=0 Jan 27 16:05:57 crc kubenswrapper[4966]: I0127 16:05:57.249752 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c4wdb" event={"ID":"5989d951-ed71-4800-9400-390cbe5513f9","Type":"ContainerDied","Data":"b1787c60db8a370a891efda62a6662c217861954221efe0b8d26c672d800384d"} Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.264863 4966 generic.go:334] "Generic (PLEG): container finished" podID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerID="5685c7a8b83e567e446891d0a4cdff3f354f53b26adc1b73d1d466e112c1a1f5" exitCode=0 Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.265415 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerDied","Data":"5685c7a8b83e567e446891d0a4cdff3f354f53b26adc1b73d1d466e112c1a1f5"} Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.265467 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c153482-f01a-4e13-ba03-fdd162f3a758","Type":"ContainerDied","Data":"6aa9fd8336bf768fb565cc8f9aea0a88dd612e3ac5099b0ae87976278d85b9ff"} Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.265487 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa9fd8336bf768fb565cc8f9aea0a88dd612e3ac5099b0ae87976278d85b9ff" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.372664 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.467573 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v945r\" (UniqueName: \"kubernetes.io/projected/7c153482-f01a-4e13-ba03-fdd162f3a758-kube-api-access-v945r\") pod \"7c153482-f01a-4e13-ba03-fdd162f3a758\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.467648 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-sg-core-conf-yaml\") pod \"7c153482-f01a-4e13-ba03-fdd162f3a758\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.467756 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-run-httpd\") pod \"7c153482-f01a-4e13-ba03-fdd162f3a758\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.467830 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-combined-ca-bundle\") pod \"7c153482-f01a-4e13-ba03-fdd162f3a758\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.467871 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-scripts\") pod \"7c153482-f01a-4e13-ba03-fdd162f3a758\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.467933 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-log-httpd\") pod \"7c153482-f01a-4e13-ba03-fdd162f3a758\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.467965 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-config-data\") pod \"7c153482-f01a-4e13-ba03-fdd162f3a758\" (UID: \"7c153482-f01a-4e13-ba03-fdd162f3a758\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.471724 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7c153482-f01a-4e13-ba03-fdd162f3a758" (UID: "7c153482-f01a-4e13-ba03-fdd162f3a758"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.472737 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7c153482-f01a-4e13-ba03-fdd162f3a758" (UID: "7c153482-f01a-4e13-ba03-fdd162f3a758"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.512533 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c153482-f01a-4e13-ba03-fdd162f3a758-kube-api-access-v945r" (OuterVolumeSpecName: "kube-api-access-v945r") pod "7c153482-f01a-4e13-ba03-fdd162f3a758" (UID: "7c153482-f01a-4e13-ba03-fdd162f3a758"). InnerVolumeSpecName "kube-api-access-v945r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.512654 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-scripts" (OuterVolumeSpecName: "scripts") pod "7c153482-f01a-4e13-ba03-fdd162f3a758" (UID: "7c153482-f01a-4e13-ba03-fdd162f3a758"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.525587 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7c153482-f01a-4e13-ba03-fdd162f3a758" (UID: "7c153482-f01a-4e13-ba03-fdd162f3a758"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.571418 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v945r\" (UniqueName: \"kubernetes.io/projected/7c153482-f01a-4e13-ba03-fdd162f3a758-kube-api-access-v945r\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.571456 4966 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.571465 4966 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.571473 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.571482 4966 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c153482-f01a-4e13-ba03-fdd162f3a758-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.598567 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c153482-f01a-4e13-ba03-fdd162f3a758" (UID: "7c153482-f01a-4e13-ba03-fdd162f3a758"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.639180 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-config-data" (OuterVolumeSpecName: "config-data") pod "7c153482-f01a-4e13-ba03-fdd162f3a758" (UID: "7c153482-f01a-4e13-ba03-fdd162f3a758"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.663644 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.674040 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.674094 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c153482-f01a-4e13-ba03-fdd162f3a758-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.775150 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-db-sync-config-data\") pod \"5989d951-ed71-4800-9400-390cbe5513f9\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.775459 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-combined-ca-bundle\") pod \"5989d951-ed71-4800-9400-390cbe5513f9\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.775541 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdq9h\" (UniqueName: \"kubernetes.io/projected/5989d951-ed71-4800-9400-390cbe5513f9-kube-api-access-gdq9h\") pod \"5989d951-ed71-4800-9400-390cbe5513f9\" (UID: \"5989d951-ed71-4800-9400-390cbe5513f9\") " Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.778537 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5989d951-ed71-4800-9400-390cbe5513f9" (UID: "5989d951-ed71-4800-9400-390cbe5513f9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.779380 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5989d951-ed71-4800-9400-390cbe5513f9-kube-api-access-gdq9h" (OuterVolumeSpecName: "kube-api-access-gdq9h") pod "5989d951-ed71-4800-9400-390cbe5513f9" (UID: "5989d951-ed71-4800-9400-390cbe5513f9"). InnerVolumeSpecName "kube-api-access-gdq9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.814255 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5989d951-ed71-4800-9400-390cbe5513f9" (UID: "5989d951-ed71-4800-9400-390cbe5513f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.878021 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdq9h\" (UniqueName: \"kubernetes.io/projected/5989d951-ed71-4800-9400-390cbe5513f9-kube-api-access-gdq9h\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.878068 4966 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:58 crc kubenswrapper[4966]: I0127 16:05:58.878076 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5989d951-ed71-4800-9400-390cbe5513f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.305878 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c4wdb" event={"ID":"5989d951-ed71-4800-9400-390cbe5513f9","Type":"ContainerDied","Data":"046b0ef810b1ef64ba42b7a96c92c38e511990946e24bce7e13856eae676bcd7"} Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.305940 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="046b0ef810b1ef64ba42b7a96c92c38e511990946e24bce7e13856eae676bcd7" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.306027 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c4wdb" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.307875 4966 generic.go:334] "Generic (PLEG): container finished" podID="05fc08f5-c60a-4248-8de2-447d0415188e" containerID="4aede79151d22fda9bfdcbd820539923d790f45a0123f8f7ce030fa9ce38640f" exitCode=0 Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.307974 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9mgp8" event={"ID":"05fc08f5-c60a-4248-8de2-447d0415188e","Type":"ContainerDied","Data":"4aede79151d22fda9bfdcbd820539923d790f45a0123f8f7ce030fa9ce38640f"} Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.314139 4966 generic.go:334] "Generic (PLEG): container finished" podID="4d47c000-d7a4-4dca-a051-15a5d91f3ab9" containerID="f184896614b1c948c519e41aba00ef22569dfae9acedab34364b4b2e8ca228f4" exitCode=0 Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.314250 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.319048 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dp2rw" event={"ID":"4d47c000-d7a4-4dca-a051-15a5d91f3ab9","Type":"ContainerDied","Data":"f184896614b1c948c519e41aba00ef22569dfae9acedab34364b4b2e8ca228f4"} Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.378979 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.397682 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411153 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:05:59 crc kubenswrapper[4966]: E0127 16:05:59.411613 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="sg-core" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411633 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="sg-core" Jan 27 16:05:59 crc kubenswrapper[4966]: E0127 16:05:59.411668 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="proxy-httpd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411674 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="proxy-httpd" Jan 27 16:05:59 crc kubenswrapper[4966]: E0127 16:05:59.411688 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="ceilometer-central-agent" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411693 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="ceilometer-central-agent" Jan 27 16:05:59 crc kubenswrapper[4966]: E0127 16:05:59.411705 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5989d951-ed71-4800-9400-390cbe5513f9" containerName="barbican-db-sync" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411711 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5989d951-ed71-4800-9400-390cbe5513f9" containerName="barbican-db-sync" Jan 27 16:05:59 crc kubenswrapper[4966]: E0127 16:05:59.411720 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="ceilometer-notification-agent" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411727 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="ceilometer-notification-agent" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411946 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="ceilometer-central-agent" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411970 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="ceilometer-notification-agent" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411984 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="proxy-httpd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.411999 4966 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5989d951-ed71-4800-9400-390cbe5513f9" containerName="barbican-db-sync" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.412019 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" containerName="sg-core" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.413971 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.423371 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.423482 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.423630 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.500638 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-config-data\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.500995 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvzd\" (UniqueName: \"kubernetes.io/projected/d50b72a4-4856-4248-b784-239009beb314-kube-api-access-xpvzd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.501138 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-log-httpd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.501302 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.501508 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-run-httpd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.501661 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.501837 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-scripts\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc 
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.531829 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5d55df94bc-bzw98"]
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.533586 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d55df94bc-bzw98"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.535721 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.537572 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.537758 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jt6w5"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.564203 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d55df94bc-bzw98"]
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.598202 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-797bd7c9db-vjlzd"]
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.606541 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.614200 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.615660 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.615760 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-run-httpd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.615806 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.615867 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-scripts\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.616017 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-config-data\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.616060 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpvzd\" (UniqueName: \"kubernetes.io/projected/d50b72a4-4856-4248-b784-239009beb314-kube-api-access-xpvzd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0"
\"kubernetes.io/projected/d50b72a4-4856-4248-b784-239009beb314-kube-api-access-xpvzd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.616099 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-log-httpd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.624170 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.628395 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-config-data\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.633735 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-run-httpd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.635856 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-scripts\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.635925 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-797bd7c9db-vjlzd"] Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.642274 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-log-httpd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.650541 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpvzd\" (UniqueName: \"kubernetes.io/projected/d50b72a4-4856-4248-b784-239009beb314-kube-api-access-xpvzd\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.651067 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.671259 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-mgmwk"] Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.671609 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" podUID="4fab0805-4dcd-46f0-9fdf-3234dccac22e" containerName="dnsmasq-dns" 
containerID="cri-o://c18251672f061e2b774367f4290566cce6c8ee38363ff38d531a5b438f1d7f6b" gracePeriod=10 Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.680540 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.707203 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dwz97"] Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.709152 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.717691 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjkp\" (UniqueName: \"kubernetes.io/projected/db44a3bd-5583-4a79-838d-6a21f083e020-kube-api-access-xbjkp\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.717767 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e49ad1cd-2925-41c2-b562-8a3478420d39-logs\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.717803 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db44a3bd-5583-4a79-838d-6a21f083e020-combined-ca-bundle\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.717845 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e49ad1cd-2925-41c2-b562-8a3478420d39-config-data-custom\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.717878 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqgmk\" (UniqueName: \"kubernetes.io/projected/e49ad1cd-2925-41c2-b562-8a3478420d39-kube-api-access-gqgmk\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.717913 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db44a3bd-5583-4a79-838d-6a21f083e020-config-data\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.717949 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db44a3bd-5583-4a79-838d-6a21f083e020-config-data-custom\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: 
\"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.717966 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e49ad1cd-2925-41c2-b562-8a3478420d39-config-data\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.718004 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e49ad1cd-2925-41c2-b562-8a3478420d39-combined-ca-bundle\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.718068 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db44a3bd-5583-4a79-838d-6a21f083e020-logs\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.728234 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dwz97"] Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.771116 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.793615 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f5c687874-swft6"] Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.796277 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.807929 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.821209 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db44a3bd-5583-4a79-838d-6a21f083e020-combined-ca-bundle\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.821796 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e49ad1cd-2925-41c2-b562-8a3478420d39-logs\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.821923 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.822012 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.822130 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e49ad1cd-2925-41c2-b562-8a3478420d39-config-data-custom\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.823098 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqgmk\" (UniqueName: \"kubernetes.io/projected/e49ad1cd-2925-41c2-b562-8a3478420d39-kube-api-access-gqgmk\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.823201 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db44a3bd-5583-4a79-838d-6a21f083e020-config-data\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.823304 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db44a3bd-5583-4a79-838d-6a21f083e020-config-data-custom\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.823377 
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.823498 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.823584 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e49ad1cd-2925-41c2-b562-8a3478420d39-combined-ca-bundle\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.823682 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvwbz\" (UniqueName: \"kubernetes.io/projected/f36d34a3-56a3-4adf-be27-15bcd39431ed-kube-api-access-dvwbz\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.823789 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.823872 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db44a3bd-5583-4a79-838d-6a21f083e020-logs\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.824015 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbjkp\" (UniqueName: \"kubernetes.io/projected/db44a3bd-5583-4a79-838d-6a21f083e020-kube-api-access-xbjkp\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.824105 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-config\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97"
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.836571 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f5c687874-swft6"]
Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.837200 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db44a3bd-5583-4a79-838d-6a21f083e020-logs\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98"
\"kubernetes.io/empty-dir/db44a3bd-5583-4a79-838d-6a21f083e020-logs\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.838216 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e49ad1cd-2925-41c2-b562-8a3478420d39-logs\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.843815 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e49ad1cd-2925-41c2-b562-8a3478420d39-config-data-custom\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.844575 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e49ad1cd-2925-41c2-b562-8a3478420d39-combined-ca-bundle\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.845168 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db44a3bd-5583-4a79-838d-6a21f083e020-combined-ca-bundle\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.851781 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e49ad1cd-2925-41c2-b562-8a3478420d39-config-data\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.855948 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db44a3bd-5583-4a79-838d-6a21f083e020-config-data\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.859190 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqgmk\" (UniqueName: \"kubernetes.io/projected/e49ad1cd-2925-41c2-b562-8a3478420d39-kube-api-access-gqgmk\") pod \"barbican-keystone-listener-797bd7c9db-vjlzd\" (UID: \"e49ad1cd-2925-41c2-b562-8a3478420d39\") " pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.860171 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db44a3bd-5583-4a79-838d-6a21f083e020-config-data-custom\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.860726 4966 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-xbjkp\" (UniqueName: \"kubernetes.io/projected/db44a3bd-5583-4a79-838d-6a21f083e020-kube-api-access-xbjkp\") pod \"barbican-worker-5d55df94bc-bzw98\" (UID: \"db44a3bd-5583-4a79-838d-6a21f083e020\") " pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.873750 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d55df94bc-bzw98" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926312 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-config\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926393 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926422 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926445 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553ff3ba-f6fa-4006-925f-4c5d43741caa-logs\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926483 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpl5q\" (UniqueName: \"kubernetes.io/projected/553ff3ba-f6fa-4006-925f-4c5d43741caa-kube-api-access-gpl5q\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926503 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-combined-ca-bundle\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926529 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926569 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: 
\"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926600 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvwbz\" (UniqueName: \"kubernetes.io/projected/f36d34a3-56a3-4adf-be27-15bcd39431ed-kube-api-access-dvwbz\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926637 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.926661 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data-custom\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.928499 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.928724 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.929030 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.929256 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.929365 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-config\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.947813 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.960244 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvwbz\" (UniqueName: \"kubernetes.io/projected/f36d34a3-56a3-4adf-be27-15bcd39431ed-kube-api-access-dvwbz\") pod \"dnsmasq-dns-848cf88cfc-dwz97\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:05:59 crc kubenswrapper[4966]: I0127 16:05:59.966989 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.028102 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpl5q\" (UniqueName: \"kubernetes.io/projected/553ff3ba-f6fa-4006-925f-4c5d43741caa-kube-api-access-gpl5q\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.028136 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-combined-ca-bundle\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.028176 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.028282 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data-custom\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.028374 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553ff3ba-f6fa-4006-925f-4c5d43741caa-logs\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.035336 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553ff3ba-f6fa-4006-925f-4c5d43741caa-logs\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.054760 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-combined-ca-bundle\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.055709 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.068592 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data-custom\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.095588 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpl5q\" (UniqueName: \"kubernetes.io/projected/553ff3ba-f6fa-4006-925f-4c5d43741caa-kube-api-access-gpl5q\") pod \"barbican-api-f5c687874-swft6\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.331762 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.397086 4966 generic.go:334] "Generic (PLEG): container finished" podID="4fab0805-4dcd-46f0-9fdf-3234dccac22e" containerID="c18251672f061e2b774367f4290566cce6c8ee38363ff38d531a5b438f1d7f6b" exitCode=0 Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.398831 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" event={"ID":"4fab0805-4dcd-46f0-9fdf-3234dccac22e","Type":"ContainerDied","Data":"c18251672f061e2b774367f4290566cce6c8ee38363ff38d531a5b438f1d7f6b"} Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.521256 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.556643 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c153482-f01a-4e13-ba03-fdd162f3a758" path="/var/lib/kubelet/pods/7c153482-f01a-4e13-ba03-fdd162f3a758/volumes" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.579905 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.655285 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-nb\") pod \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.655360 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-config\") pod \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.655459 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-svc\") pod \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.655526 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmwf2\" (UniqueName: \"kubernetes.io/projected/4fab0805-4dcd-46f0-9fdf-3234dccac22e-kube-api-access-nmwf2\") pod \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.655653 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-sb\") pod \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.655752 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-swift-storage-0\") pod \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\" (UID: \"4fab0805-4dcd-46f0-9fdf-3234dccac22e\") " Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.661668 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fab0805-4dcd-46f0-9fdf-3234dccac22e-kube-api-access-nmwf2" (OuterVolumeSpecName: "kube-api-access-nmwf2") pod "4fab0805-4dcd-46f0-9fdf-3234dccac22e" (UID: "4fab0805-4dcd-46f0-9fdf-3234dccac22e"). InnerVolumeSpecName "kube-api-access-nmwf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.720508 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-config" (OuterVolumeSpecName: "config") pod "4fab0805-4dcd-46f0-9fdf-3234dccac22e" (UID: "4fab0805-4dcd-46f0-9fdf-3234dccac22e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.731467 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4fab0805-4dcd-46f0-9fdf-3234dccac22e" (UID: "4fab0805-4dcd-46f0-9fdf-3234dccac22e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.736163 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4fab0805-4dcd-46f0-9fdf-3234dccac22e" (UID: "4fab0805-4dcd-46f0-9fdf-3234dccac22e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.748401 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4fab0805-4dcd-46f0-9fdf-3234dccac22e" (UID: "4fab0805-4dcd-46f0-9fdf-3234dccac22e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.752175 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4fab0805-4dcd-46f0-9fdf-3234dccac22e" (UID: "4fab0805-4dcd-46f0-9fdf-3234dccac22e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.761973 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmwf2\" (UniqueName: \"kubernetes.io/projected/4fab0805-4dcd-46f0-9fdf-3234dccac22e-kube-api-access-nmwf2\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.762012 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.762024 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.762032 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.762042 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.762050 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fab0805-4dcd-46f0-9fdf-3234dccac22e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:00 crc kubenswrapper[4966]: I0127 16:06:00.919287 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-9mgp8" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.018034 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-797bd7c9db-vjlzd"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.067783 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-config-data\") pod \"05fc08f5-c60a-4248-8de2-447d0415188e\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.068030 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j78g\" (UniqueName: \"kubernetes.io/projected/05fc08f5-c60a-4248-8de2-447d0415188e-kube-api-access-4j78g\") pod \"05fc08f5-c60a-4248-8de2-447d0415188e\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.068217 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-combined-ca-bundle\") pod \"05fc08f5-c60a-4248-8de2-447d0415188e\" (UID: \"05fc08f5-c60a-4248-8de2-447d0415188e\") " Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.073685 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fc08f5-c60a-4248-8de2-447d0415188e-kube-api-access-4j78g" (OuterVolumeSpecName: "kube-api-access-4j78g") pod "05fc08f5-c60a-4248-8de2-447d0415188e" (UID: "05fc08f5-c60a-4248-8de2-447d0415188e"). InnerVolumeSpecName "kube-api-access-4j78g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.152055 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05fc08f5-c60a-4248-8de2-447d0415188e" (UID: "05fc08f5-c60a-4248-8de2-447d0415188e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.170315 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j78g\" (UniqueName: \"kubernetes.io/projected/05fc08f5-c60a-4248-8de2-447d0415188e-kube-api-access-4j78g\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.170341 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.249198 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-config-data" (OuterVolumeSpecName: "config-data") pod "05fc08f5-c60a-4248-8de2-447d0415188e" (UID: "05fc08f5-c60a-4248-8de2-447d0415188e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.272649 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05fc08f5-c60a-4248-8de2-447d0415188e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.296285 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dwz97"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.304697 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.314714 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d55df94bc-bzw98"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.330045 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f5c687874-swft6"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.374446 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4cs2\" (UniqueName: \"kubernetes.io/projected/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-kube-api-access-q4cs2\") pod \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.374582 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-config-data\") pod \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.374633 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-etc-machine-id\") pod \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.374662 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-db-sync-config-data\") pod \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.374752 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-combined-ca-bundle\") pod \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.374768 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-scripts\") pod \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\" (UID: \"4d47c000-d7a4-4dca-a051-15a5d91f3ab9\") " Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.376007 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4d47c000-d7a4-4dca-a051-15a5d91f3ab9" (UID: "4d47c000-d7a4-4dca-a051-15a5d91f3ab9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.382924 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4d47c000-d7a4-4dca-a051-15a5d91f3ab9" (UID: "4d47c000-d7a4-4dca-a051-15a5d91f3ab9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.394762 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-kube-api-access-q4cs2" (OuterVolumeSpecName: "kube-api-access-q4cs2") pod "4d47c000-d7a4-4dca-a051-15a5d91f3ab9" (UID: "4d47c000-d7a4-4dca-a051-15a5d91f3ab9"). InnerVolumeSpecName "kube-api-access-q4cs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.398076 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-scripts" (OuterVolumeSpecName: "scripts") pod "4d47c000-d7a4-4dca-a051-15a5d91f3ab9" (UID: "4d47c000-d7a4-4dca-a051-15a5d91f3ab9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.408702 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f5c687874-swft6" event={"ID":"553ff3ba-f6fa-4006-925f-4c5d43741caa","Type":"ContainerStarted","Data":"cf6377fc76930e9cc5ddc5850da4f4a1dcb91616ef91bb9de322a8818bdabf2c"} Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.409546 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d55df94bc-bzw98" event={"ID":"db44a3bd-5583-4a79-838d-6a21f083e020","Type":"ContainerStarted","Data":"938febcfbf8d3b1af721ba5518ee74da360edfe344983a50e6a6649c2ab83175"} Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.410479 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" event={"ID":"e49ad1cd-2925-41c2-b562-8a3478420d39","Type":"ContainerStarted","Data":"4c501e91a6b994f46ab142ac15732bb359895f50eb0aea6011b209280262a943"} Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.411451 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" event={"ID":"f36d34a3-56a3-4adf-be27-15bcd39431ed","Type":"ContainerStarted","Data":"1050df5e234e53b68b804a71a71c00607d65ea4820c175bd3bfeb80313884b03"} Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.413701 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" event={"ID":"4fab0805-4dcd-46f0-9fdf-3234dccac22e","Type":"ContainerDied","Data":"c7a997ae413a0fee51ad11ecc7a0e93550b529426c2090622a5242abfa8c9f5f"} Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.413729 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-mgmwk" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.413741 4966 scope.go:117] "RemoveContainer" containerID="c18251672f061e2b774367f4290566cce6c8ee38363ff38d531a5b438f1d7f6b" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.415230 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9mgp8" event={"ID":"05fc08f5-c60a-4248-8de2-447d0415188e","Type":"ContainerDied","Data":"5a6f4e234c1b3747c3c3b19bb8993357cc35c23f55db123d0151470c3b6baf3d"} Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.415323 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a6f4e234c1b3747c3c3b19bb8993357cc35c23f55db123d0151470c3b6baf3d" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.415253 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9mgp8" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.418349 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dp2rw" event={"ID":"4d47c000-d7a4-4dca-a051-15a5d91f3ab9","Type":"ContainerDied","Data":"6ec5784942c7c3981aa257e3cac21e872663f7d243048fb4b67ea9da8838113e"} Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.418378 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ec5784942c7c3981aa257e3cac21e872663f7d243048fb4b67ea9da8838113e" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.418431 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dp2rw" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.423189 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerStarted","Data":"b512d82ebb7a777df9b1a92fca506801927a3cc48e324747962b3ea261934524"} Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.441485 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d47c000-d7a4-4dca-a051-15a5d91f3ab9" (UID: "4d47c000-d7a4-4dca-a051-15a5d91f3ab9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.463772 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-mgmwk"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.467721 4966 scope.go:117] "RemoveContainer" containerID="92b2d2a44dcd134d2ac673166025c78e343bc16744eab1ad7538cb7992297b5c" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.475021 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-mgmwk"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.481385 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4cs2\" (UniqueName: \"kubernetes.io/projected/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-kube-api-access-q4cs2\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.481412 4966 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.481421 4966 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.481430 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.481440 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.486449 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-config-data" (OuterVolumeSpecName: "config-data") pod "4d47c000-d7a4-4dca-a051-15a5d91f3ab9" (UID: "4d47c000-d7a4-4dca-a051-15a5d91f3ab9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.583316 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d47c000-d7a4-4dca-a051-15a5d91f3ab9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.610594 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 16:06:01 crc kubenswrapper[4966]: E0127 16:06:01.611200 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fab0805-4dcd-46f0-9fdf-3234dccac22e" containerName="dnsmasq-dns" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.611216 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fab0805-4dcd-46f0-9fdf-3234dccac22e" containerName="dnsmasq-dns" Jan 27 16:06:01 crc kubenswrapper[4966]: E0127 16:06:01.611231 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fc08f5-c60a-4248-8de2-447d0415188e" containerName="heat-db-sync" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.611239 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fc08f5-c60a-4248-8de2-447d0415188e" containerName="heat-db-sync" Jan 27 16:06:01 crc kubenswrapper[4966]: E0127 16:06:01.611267 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fab0805-4dcd-46f0-9fdf-3234dccac22e" containerName="init" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.611275 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fab0805-4dcd-46f0-9fdf-3234dccac22e" containerName="init" Jan 27 16:06:01 crc kubenswrapper[4966]: E0127 16:06:01.611308 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d47c000-d7a4-4dca-a051-15a5d91f3ab9" containerName="cinder-db-sync" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.611316 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d47c000-d7a4-4dca-a051-15a5d91f3ab9" containerName="cinder-db-sync" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.611568 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fc08f5-c60a-4248-8de2-447d0415188e" containerName="heat-db-sync" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.611587 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fab0805-4dcd-46f0-9fdf-3234dccac22e" containerName="dnsmasq-dns" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.611622 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d47c000-d7a4-4dca-a051-15a5d91f3ab9" containerName="cinder-db-sync" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.612979 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.615709 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.640007 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.688285 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhpgk\" (UniqueName: \"kubernetes.io/projected/fb22a22f-5bef-4207-a218-78f78129d538-kube-api-access-zhpgk\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.688358 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-scripts\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.688426 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.688480 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb22a22f-5bef-4207-a218-78f78129d538-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.688541 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.688583 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.704948 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dwz97"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.749258 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-22rl8"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.752803 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.766758 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-22rl8"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.796026 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhpgk\" (UniqueName: \"kubernetes.io/projected/fb22a22f-5bef-4207-a218-78f78129d538-kube-api-access-zhpgk\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.796092 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-scripts\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.812548 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.812723 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb22a22f-5bef-4207-a218-78f78129d538-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.812943 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.813034 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.814138 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb22a22f-5bef-4207-a218-78f78129d538-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.816765 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.819300 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhpgk\" (UniqueName: \"kubernetes.io/projected/fb22a22f-5bef-4207-a218-78f78129d538-kube-api-access-zhpgk\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 
16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.819764 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.843445 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-scripts\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.850121 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data\") pod \"cinder-scheduler-0\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.850374 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.852122 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.856526 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.893863 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.921870 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cdl4\" (UniqueName: \"kubernetes.io/projected/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-kube-api-access-4cdl4\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.921987 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-config\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.922035 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-svc\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.922238 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.922302 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.922373 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:01 crc kubenswrapper[4966]: I0127 16:06:01.944314 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.029380 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-logs\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.031492 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-config\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.031590 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-svc\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.031669 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.031836 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.032198 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data-custom\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.032278 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.032384 4966 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.032471 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwkfc\" (UniqueName: \"kubernetes.io/projected/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-kube-api-access-lwkfc\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.032533 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.032608 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.032698 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdl4\" (UniqueName: \"kubernetes.io/projected/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-kube-api-access-4cdl4\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.032725 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-scripts\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.033047 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-config\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.033951 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.034113 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.034168 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.034202 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-svc\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.074672 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdl4\" (UniqueName: \"kubernetes.io/projected/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-kube-api-access-4cdl4\") pod \"dnsmasq-dns-6578955fd5-22rl8\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.077222 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.134921 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.135452 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.135590 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data-custom\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.135731 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwkfc\" (UniqueName: \"kubernetes.io/projected/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-kube-api-access-lwkfc\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.135802 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.135840 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-scripts\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.135876 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-logs\") pod 
\"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.136373 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-logs\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.135731 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.144620 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-scripts\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.147891 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: E0127 16:06:02.153956 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf36d34a3_56a3_4adf_be27_15bcd39431ed.slice/crio-4aed40990e9abd9d7e7daea7ca9656d7429a1c44056b264d92f2b6cf056ef34a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d47c000_d7a4_4dca_a051_15a5d91f3ab9.slice/crio-6ec5784942c7c3981aa257e3cac21e872663f7d243048fb4b67ea9da8838113e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d47c000_d7a4_4dca_a051_15a5d91f3ab9.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.155668 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.157765 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data-custom\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.160418 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwkfc\" (UniqueName: \"kubernetes.io/projected/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-kube-api-access-lwkfc\") pod \"cinder-api-0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.200925 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.462678 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f5c687874-swft6" event={"ID":"553ff3ba-f6fa-4006-925f-4c5d43741caa","Type":"ContainerStarted","Data":"303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40"} Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.463052 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f5c687874-swft6" event={"ID":"553ff3ba-f6fa-4006-925f-4c5d43741caa","Type":"ContainerStarted","Data":"ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6"} Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.463947 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.464168 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.495341 4966 generic.go:334] "Generic (PLEG): container finished" podID="f36d34a3-56a3-4adf-be27-15bcd39431ed" containerID="4aed40990e9abd9d7e7daea7ca9656d7429a1c44056b264d92f2b6cf056ef34a" exitCode=0 Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.495432 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" event={"ID":"f36d34a3-56a3-4adf-be27-15bcd39431ed","Type":"ContainerDied","Data":"4aed40990e9abd9d7e7daea7ca9656d7429a1c44056b264d92f2b6cf056ef34a"} Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.515152 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f5c687874-swft6" podStartSLOduration=3.515129643 podStartE2EDuration="3.515129643s" podCreationTimestamp="2026-01-27 16:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:02.498333236 +0000 UTC m=+1428.801126744" watchObservedRunningTime="2026-01-27 16:06:02.515129643 +0000 UTC m=+1428.817923131" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.584391 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fab0805-4dcd-46f0-9fdf-3234dccac22e" path="/var/lib/kubelet/pods/4fab0805-4dcd-46f0-9fdf-3234dccac22e/volumes" Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.639304 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerStarted","Data":"1dec27b91ce2f475d477ea70f8314f9a1eb7cea3697059c003c9e588fccff574"} Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.649382 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 16:06:02 crc kubenswrapper[4966]: I0127 16:06:02.781606 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-22rl8"] Jan 27 16:06:03 crc kubenswrapper[4966]: I0127 16:06:03.015094 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 16:06:03 crc kubenswrapper[4966]: E0127 16:06:03.598354 4966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 16:06:03 crc kubenswrapper[4966]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/f36d34a3-56a3-4adf-be27-15bcd39431ed/volume-subpaths/dns-svc/dnsmasq-dns/1` to 
`etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 16:06:03 crc kubenswrapper[4966]: > podSandboxID="1050df5e234e53b68b804a71a71c00607d65ea4820c175bd3bfeb80313884b03" Jan 27 16:06:03 crc kubenswrapper[4966]: E0127 16:06:03.598814 4966 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 16:06:03 crc kubenswrapper[4966]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n99h8bhd9h696h649h588h5c6h658h5b4h57fh65h89h5f5h56h696h5dh8h57h597h68ch568h58dh66hf4h675h598h588h67dhb5h69h5dh6bq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvwbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-848cf88cfc-dwz97_openstack(f36d34a3-56a3-4adf-be27-15bcd39431ed): 
CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f36d34a3-56a3-4adf-be27-15bcd39431ed/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 16:06:03 crc kubenswrapper[4966]: > logger="UnhandledError" Jan 27 16:06:03 crc kubenswrapper[4966]: E0127 16:06:03.600197 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f36d34a3-56a3-4adf-be27-15bcd39431ed/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" podUID="f36d34a3-56a3-4adf-be27-15bcd39431ed" Jan 27 16:06:03 crc kubenswrapper[4966]: I0127 16:06:03.602266 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerStarted","Data":"5bbd3659f41fd5066a31dffdb9e308a96d92349f0b720d62c2bf6a95b77d7b0e"} Jan 27 16:06:03 crc kubenswrapper[4966]: I0127 16:06:03.609136 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" event={"ID":"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6","Type":"ContainerStarted","Data":"aa53c6dbb151e70bd802b8ee22d4fbdc2e3733c42fc42c6eb4c2dad0961af1df"} Jan 27 16:06:03 crc kubenswrapper[4966]: I0127 16:06:03.613021 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb22a22f-5bef-4207-a218-78f78129d538","Type":"ContainerStarted","Data":"5da95639d31fc0e3c2fc04978ddddca842b54446ae7913a6b294d295d05500c1"} Jan 27 16:06:03 crc kubenswrapper[4966]: W0127 16:06:03.613375 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cdf04e8_bdfe_4f18_b764_cdf58ea20bb0.slice/crio-3399ea6d049597ef47ea215266150616a5ffde386e1cddd2e4b82af29cf8e9df WatchSource:0}: Error finding container 3399ea6d049597ef47ea215266150616a5ffde386e1cddd2e4b82af29cf8e9df: Status 404 returned error can't find the container with id 3399ea6d049597ef47ea215266150616a5ffde386e1cddd2e4b82af29cf8e9df Jan 27 16:06:04 crc kubenswrapper[4966]: I0127 16:06:04.648476 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0","Type":"ContainerStarted","Data":"3399ea6d049597ef47ea215266150616a5ffde386e1cddd2e4b82af29cf8e9df"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.156749 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.334170 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-svc\") pod \"f36d34a3-56a3-4adf-be27-15bcd39431ed\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.334621 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvwbz\" (UniqueName: \"kubernetes.io/projected/f36d34a3-56a3-4adf-be27-15bcd39431ed-kube-api-access-dvwbz\") pod \"f36d34a3-56a3-4adf-be27-15bcd39431ed\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.334846 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-nb\") pod \"f36d34a3-56a3-4adf-be27-15bcd39431ed\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.335134 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-config\") pod \"f36d34a3-56a3-4adf-be27-15bcd39431ed\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.335460 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-swift-storage-0\") pod \"f36d34a3-56a3-4adf-be27-15bcd39431ed\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.335657 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-sb\") pod \"f36d34a3-56a3-4adf-be27-15bcd39431ed\" (UID: \"f36d34a3-56a3-4adf-be27-15bcd39431ed\") " Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.342001 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36d34a3-56a3-4adf-be27-15bcd39431ed-kube-api-access-dvwbz" (OuterVolumeSpecName: "kube-api-access-dvwbz") pod "f36d34a3-56a3-4adf-be27-15bcd39431ed" (UID: "f36d34a3-56a3-4adf-be27-15bcd39431ed"). InnerVolumeSpecName "kube-api-access-dvwbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.357579 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvwbz\" (UniqueName: \"kubernetes.io/projected/f36d34a3-56a3-4adf-be27-15bcd39431ed-kube-api-access-dvwbz\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.400328 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f36d34a3-56a3-4adf-be27-15bcd39431ed" (UID: "f36d34a3-56a3-4adf-be27-15bcd39431ed"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.462350 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.473213 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-config" (OuterVolumeSpecName: "config") pod "f36d34a3-56a3-4adf-be27-15bcd39431ed" (UID: "f36d34a3-56a3-4adf-be27-15bcd39431ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.502980 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f36d34a3-56a3-4adf-be27-15bcd39431ed" (UID: "f36d34a3-56a3-4adf-be27-15bcd39431ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.503158 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f36d34a3-56a3-4adf-be27-15bcd39431ed" (UID: "f36d34a3-56a3-4adf-be27-15bcd39431ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.534421 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f36d34a3-56a3-4adf-be27-15bcd39431ed" (UID: "f36d34a3-56a3-4adf-be27-15bcd39431ed"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.579754 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.579782 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.579793 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.579802 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f36d34a3-56a3-4adf-be27-15bcd39431ed-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.682881 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0","Type":"ContainerStarted","Data":"7b2aca9d277f1dcec6e7503e5238350016f7c435118428be7c3d5f5e2834fa18"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.690202 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" event={"ID":"f36d34a3-56a3-4adf-be27-15bcd39431ed","Type":"ContainerDied","Data":"1050df5e234e53b68b804a71a71c00607d65ea4820c175bd3bfeb80313884b03"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.690265 4966 scope.go:117] "RemoveContainer" containerID="4aed40990e9abd9d7e7daea7ca9656d7429a1c44056b264d92f2b6cf056ef34a" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.690394 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-dwz97" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.703504 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerStarted","Data":"671c0014085b324b356b6f812ef1cd434ff28515f46fa482c08e3321703cbc0a"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.744798 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d55df94bc-bzw98" event={"ID":"db44a3bd-5583-4a79-838d-6a21f083e020","Type":"ContainerStarted","Data":"e2f9ea8b68cd2988fb0a5c6116891eee1be1220caf4d3237b5a22f9e4523d07f"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.744836 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d55df94bc-bzw98" event={"ID":"db44a3bd-5583-4a79-838d-6a21f083e020","Type":"ContainerStarted","Data":"880c4efd8edcb93f41cf19335114b8a76acf57426d19f4754369b18721672b9d"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.747588 4966 generic.go:334] "Generic (PLEG): container finished" podID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" containerID="c3c8c68a7e898c416b965815724fc5fd6a882d9cb078e852cca229a4fa3254b2" exitCode=0 Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.747639 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" event={"ID":"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6","Type":"ContainerDied","Data":"c3c8c68a7e898c416b965815724fc5fd6a882d9cb078e852cca229a4fa3254b2"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.770880 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb22a22f-5bef-4207-a218-78f78129d538","Type":"ContainerStarted","Data":"2652351c1a18677ae03404b9cca67082f1e119d16cd030b8b48c966e2c3d8209"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.800750 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" event={"ID":"e49ad1cd-2925-41c2-b562-8a3478420d39","Type":"ContainerStarted","Data":"0b258bf39d18a57e64497bc319a08c9f5684ed02b15cd2354fb4b5364c8e9785"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.800797 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" event={"ID":"e49ad1cd-2925-41c2-b562-8a3478420d39","Type":"ContainerStarted","Data":"7b3c24e9d8df70bc83aea6a6c2d5c45e99bec52fc5ade7694917a261e14e3abd"} Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.804076 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dwz97"] Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.826548 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dwz97"] Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 16:06:05.828388 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5d55df94bc-bzw98" podStartSLOduration=3.820020709 podStartE2EDuration="6.828364926s" podCreationTimestamp="2026-01-27 16:05:59 +0000 UTC" firstStartedPulling="2026-01-27 16:06:01.319653684 +0000 UTC m=+1427.622447172" lastFinishedPulling="2026-01-27 16:06:04.327997901 +0000 UTC m=+1430.630791389" observedRunningTime="2026-01-27 16:06:05.800429519 +0000 UTC m=+1432.103223027" watchObservedRunningTime="2026-01-27 16:06:05.828364926 +0000 UTC m=+1432.131158414" Jan 27 16:06:05 crc kubenswrapper[4966]: I0127 
16:06:05.882399 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-797bd7c9db-vjlzd" podStartSLOduration=3.596144825 podStartE2EDuration="6.882377991s" podCreationTimestamp="2026-01-27 16:05:59 +0000 UTC" firstStartedPulling="2026-01-27 16:06:01.046039709 +0000 UTC m=+1427.348833197" lastFinishedPulling="2026-01-27 16:06:04.332272875 +0000 UTC m=+1430.635066363" observedRunningTime="2026-01-27 16:06:05.84665379 +0000 UTC m=+1432.149447288" watchObservedRunningTime="2026-01-27 16:06:05.882377991 +0000 UTC m=+1432.185171479" Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.416199 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.534410 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36d34a3-56a3-4adf-be27-15bcd39431ed" path="/var/lib/kubelet/pods/f36d34a3-56a3-4adf-be27-15bcd39431ed/volumes" Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.815406 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerStarted","Data":"9f96bb4e225ba3a3b8aea9db3e93be6156e744aa151f965afcb1acd3624c8bed"} Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.815569 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.817757 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" event={"ID":"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6","Type":"ContainerStarted","Data":"eeb0f1440ced4edb03a189170d2506dc0aa04d3d1985ed20a1eb40072dd82bb3"} Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.818410 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.820718 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb22a22f-5bef-4207-a218-78f78129d538","Type":"ContainerStarted","Data":"8f73022b4143e178e9d48477cbb119ce9d8a9ae56f9d49fea8488827be8b7de7"} Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.823277 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0","Type":"ContainerStarted","Data":"3b9cc635282d8d8bed4cd86330b8e46cca1139eac93cc04cbbf3db9735912118"} Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.823818 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.855190 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.296960752 podStartE2EDuration="7.855172782s" podCreationTimestamp="2026-01-27 16:05:59 +0000 UTC" firstStartedPulling="2026-01-27 16:06:00.601072558 +0000 UTC m=+1426.903866046" lastFinishedPulling="2026-01-27 16:06:06.159284578 +0000 UTC m=+1432.462078076" observedRunningTime="2026-01-27 16:06:06.847417499 +0000 UTC m=+1433.150210987" watchObservedRunningTime="2026-01-27 16:06:06.855172782 +0000 UTC m=+1433.157966270" Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.873885 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" podStartSLOduration=5.873867869 
podStartE2EDuration="5.873867869s" podCreationTimestamp="2026-01-27 16:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:06.872674291 +0000 UTC m=+1433.175467809" watchObservedRunningTime="2026-01-27 16:06:06.873867869 +0000 UTC m=+1433.176661357" Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.908216 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.908200255 podStartE2EDuration="5.908200255s" podCreationTimestamp="2026-01-27 16:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:06.893642089 +0000 UTC m=+1433.196435587" watchObservedRunningTime="2026-01-27 16:06:06.908200255 +0000 UTC m=+1433.210993743" Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.930361 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.157491857 podStartE2EDuration="5.93033089s" podCreationTimestamp="2026-01-27 16:06:01 +0000 UTC" firstStartedPulling="2026-01-27 16:06:02.567801355 +0000 UTC m=+1428.870594843" lastFinishedPulling="2026-01-27 16:06:04.340640368 +0000 UTC m=+1430.643433876" observedRunningTime="2026-01-27 16:06:06.915220656 +0000 UTC m=+1433.218014144" watchObservedRunningTime="2026-01-27 16:06:06.93033089 +0000 UTC m=+1433.233124378" Jan 27 16:06:06 crc kubenswrapper[4966]: I0127 16:06:06.958569 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.010753 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8d96bc9db-xwrs6"] Jan 27 16:06:07 crc kubenswrapper[4966]: E0127 16:06:07.011748 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36d34a3-56a3-4adf-be27-15bcd39431ed" containerName="init" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.011927 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36d34a3-56a3-4adf-be27-15bcd39431ed" containerName="init" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.012343 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36d34a3-56a3-4adf-be27-15bcd39431ed" containerName="init" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.014320 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.019699 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.020034 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.023739 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8d96bc9db-xwrs6"] Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.122819 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-public-tls-certs\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.122989 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-config-data\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.123092 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-combined-ca-bundle\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.123303 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ad0899a-bde7-4576-8195-6719d77a51d0-logs\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.123414 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-internal-tls-certs\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.123550 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf888\" (UniqueName: \"kubernetes.io/projected/8ad0899a-bde7-4576-8195-6719d77a51d0-kube-api-access-rf888\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.123601 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-config-data-custom\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.225198 4966 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-internal-tls-certs\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.225290 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf888\" (UniqueName: \"kubernetes.io/projected/8ad0899a-bde7-4576-8195-6719d77a51d0-kube-api-access-rf888\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.225323 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-config-data-custom\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.225394 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-public-tls-certs\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.225437 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-config-data\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.225458 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-combined-ca-bundle\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.225511 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ad0899a-bde7-4576-8195-6719d77a51d0-logs\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.225947 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ad0899a-bde7-4576-8195-6719d77a51d0-logs\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.234144 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-config-data\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.234348 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-public-tls-certs\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.248603 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-combined-ca-bundle\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.249727 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-internal-tls-certs\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.250284 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ad0899a-bde7-4576-8195-6719d77a51d0-config-data-custom\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.257452 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf888\" (UniqueName: \"kubernetes.io/projected/8ad0899a-bde7-4576-8195-6719d77a51d0-kube-api-access-rf888\") pod \"barbican-api-8d96bc9db-xwrs6\" (UID: \"8ad0899a-bde7-4576-8195-6719d77a51d0\") " pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.338114 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.834595 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerName="cinder-api-log" containerID="cri-o://7b2aca9d277f1dcec6e7503e5238350016f7c435118428be7c3d5f5e2834fa18" gracePeriod=30 Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.834655 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerName="cinder-api" containerID="cri-o://3b9cc635282d8d8bed4cd86330b8e46cca1139eac93cc04cbbf3db9735912118" gracePeriod=30 Jan 27 16:06:07 crc kubenswrapper[4966]: W0127 16:06:07.921236 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad0899a_bde7_4576_8195_6719d77a51d0.slice/crio-6911a84569ec08c31159251ba371fb575023e064afe95355fb982453ee449c89 WatchSource:0}: Error finding container 6911a84569ec08c31159251ba371fb575023e064afe95355fb982453ee449c89: Status 404 returned error can't find the container with id 6911a84569ec08c31159251ba371fb575023e064afe95355fb982453ee449c89 Jan 27 16:06:07 crc kubenswrapper[4966]: I0127 16:06:07.921940 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8d96bc9db-xwrs6"] Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.845264 4966 generic.go:334] "Generic (PLEG): container finished" podID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerID="3b9cc635282d8d8bed4cd86330b8e46cca1139eac93cc04cbbf3db9735912118" exitCode=0 Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.845530 4966 generic.go:334] "Generic (PLEG): container finished" podID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerID="7b2aca9d277f1dcec6e7503e5238350016f7c435118428be7c3d5f5e2834fa18" exitCode=143 Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.845360 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0","Type":"ContainerDied","Data":"3b9cc635282d8d8bed4cd86330b8e46cca1139eac93cc04cbbf3db9735912118"} Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.845594 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0","Type":"ContainerDied","Data":"7b2aca9d277f1dcec6e7503e5238350016f7c435118428be7c3d5f5e2834fa18"} Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.845609 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0","Type":"ContainerDied","Data":"3399ea6d049597ef47ea215266150616a5ffde386e1cddd2e4b82af29cf8e9df"} Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.845619 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3399ea6d049597ef47ea215266150616a5ffde386e1cddd2e4b82af29cf8e9df" Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.855779 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8d96bc9db-xwrs6" event={"ID":"8ad0899a-bde7-4576-8195-6719d77a51d0","Type":"ContainerStarted","Data":"764855cb83d36518f9a602ac324c49b420cba902774e8017cc5a8705d73405e7"} Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.855831 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-8d96bc9db-xwrs6" event={"ID":"8ad0899a-bde7-4576-8195-6719d77a51d0","Type":"ContainerStarted","Data":"1299536a6b1a765b5f092d0ff88485ed10f14ef13986b7b488d6f1d576348352"} Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.855842 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8d96bc9db-xwrs6" event={"ID":"8ad0899a-bde7-4576-8195-6719d77a51d0","Type":"ContainerStarted","Data":"6911a84569ec08c31159251ba371fb575023e064afe95355fb982453ee449c89"} Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.895981 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8d96bc9db-xwrs6" podStartSLOduration=2.895962342 podStartE2EDuration="2.895962342s" podCreationTimestamp="2026-01-27 16:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:08.883379608 +0000 UTC m=+1435.186173096" watchObservedRunningTime="2026-01-27 16:06:08.895962342 +0000 UTC m=+1435.198755830" Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.909998 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.975185 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-combined-ca-bundle\") pod \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.975261 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-logs\") pod \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.975296 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data-custom\") pod \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.975482 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-scripts\") pod \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.975511 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwkfc\" (UniqueName: \"kubernetes.io/projected/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-kube-api-access-lwkfc\") pod \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.975559 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-etc-machine-id\") pod \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.975611 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data\") pod \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\" (UID: \"3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0\") " Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.975752 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" (UID: "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.977748 4966 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.979141 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-logs" (OuterVolumeSpecName: "logs") pod "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" (UID: "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.981597 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" (UID: "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.983962 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-kube-api-access-lwkfc" (OuterVolumeSpecName: "kube-api-access-lwkfc") pod "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" (UID: "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0"). InnerVolumeSpecName "kube-api-access-lwkfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:08 crc kubenswrapper[4966]: I0127 16:06:08.991059 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-scripts" (OuterVolumeSpecName: "scripts") pod "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" (UID: "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.049066 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" (UID: "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.067957 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.080629 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data" (OuterVolumeSpecName: "config-data") pod "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" (UID: "3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.080804 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.082203 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.082225 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwkfc\" (UniqueName: \"kubernetes.io/projected/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-kube-api-access-lwkfc\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.082237 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.082248 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.082257 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.082264 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.865502 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.865774 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.866021 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.935953 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.963405 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.978720 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 16:06:09 crc kubenswrapper[4966]: E0127 16:06:09.979348 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerName="cinder-api-log" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.979373 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerName="cinder-api-log" Jan 27 16:06:09 crc kubenswrapper[4966]: E0127 16:06:09.979423 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerName="cinder-api" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.979433 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerName="cinder-api" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.979685 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerName="cinder-api" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.979713 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" containerName="cinder-api-log" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.981625 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.984426 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.984597 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.984756 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 16:06:09 crc kubenswrapper[4966]: I0127 16:06:09.993341 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.121464 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7212827-c660-4b8a-b0ef-62d91f255dd6-logs\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.122955 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.122984 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-config-data\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.123087 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.124122 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7212827-c660-4b8a-b0ef-62d91f255dd6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.124241 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-scripts\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.124315 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2k26\" (UniqueName: \"kubernetes.io/projected/c7212827-c660-4b8a-b0ef-62d91f255dd6-kube-api-access-t2k26\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.124346 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.124364 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.226808 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2k26\" (UniqueName: \"kubernetes.io/projected/c7212827-c660-4b8a-b0ef-62d91f255dd6-kube-api-access-t2k26\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.226926 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.226965 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.227154 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7212827-c660-4b8a-b0ef-62d91f255dd6-logs\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.227221 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.227259 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-config-data\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.227348 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.227452 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7212827-c660-4b8a-b0ef-62d91f255dd6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.227539 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-scripts\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.227613 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7212827-c660-4b8a-b0ef-62d91f255dd6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.227804 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7212827-c660-4b8a-b0ef-62d91f255dd6-logs\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.233515 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-config-data\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.233603 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.234577 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.236256 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.245386 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.247433 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2k26\" (UniqueName: \"kubernetes.io/projected/c7212827-c660-4b8a-b0ef-62d91f255dd6-kube-api-access-t2k26\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.247676 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7212827-c660-4b8a-b0ef-62d91f255dd6-scripts\") pod \"cinder-api-0\" (UID: \"c7212827-c660-4b8a-b0ef-62d91f255dd6\") " pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.317622 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.544339 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0" path="/var/lib/kubelet/pods/3cdf04e8-bdfe-4f18-b764-cdf58ea20bb0/volumes" Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.810246 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 16:06:10 crc kubenswrapper[4966]: I0127 16:06:10.878056 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7212827-c660-4b8a-b0ef-62d91f255dd6","Type":"ContainerStarted","Data":"3fffc9e2ac07e5dc1227fac745a1997f4b4af1083fd18bfdf495c32e1707a861"} Jan 27 16:06:11 crc kubenswrapper[4966]: I0127 16:06:11.867723 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:11 crc kubenswrapper[4966]: I0127 16:06:11.897685 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7212827-c660-4b8a-b0ef-62d91f255dd6","Type":"ContainerStarted","Data":"6fa837d4b98dc8d7f252c773a1c1288f7713ebd1f5472039027685f08fa64cf7"} Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.041415 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.079073 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.152139 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lhvdf"] Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.152377 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" podUID="c1a7725b-4536-4728-9aa7-99ca1c44daa6" containerName="dnsmasq-dns" containerID="cri-o://3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7" gracePeriod=10 Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.231270 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.381709 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.801315 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.914646 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpdj4\" (UniqueName: \"kubernetes.io/projected/c1a7725b-4536-4728-9aa7-99ca1c44daa6-kube-api-access-lpdj4\") pod \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.914689 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-swift-storage-0\") pod \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.914854 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-svc\") pod \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.914875 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-sb\") pod \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.915039 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-config\") pod \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.915097 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-nb\") pod \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\" (UID: \"c1a7725b-4536-4728-9aa7-99ca1c44daa6\") " Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.926464 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a7725b-4536-4728-9aa7-99ca1c44daa6-kube-api-access-lpdj4" (OuterVolumeSpecName: "kube-api-access-lpdj4") pod "c1a7725b-4536-4728-9aa7-99ca1c44daa6" (UID: "c1a7725b-4536-4728-9aa7-99ca1c44daa6"). InnerVolumeSpecName "kube-api-access-lpdj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.935696 4966 generic.go:334] "Generic (PLEG): container finished" podID="c1a7725b-4536-4728-9aa7-99ca1c44daa6" containerID="3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7" exitCode=0 Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.935777 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" event={"ID":"c1a7725b-4536-4728-9aa7-99ca1c44daa6","Type":"ContainerDied","Data":"3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7"} Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.936082 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" event={"ID":"c1a7725b-4536-4728-9aa7-99ca1c44daa6","Type":"ContainerDied","Data":"aa927131a970f89d7e78417c7bdfad40feaa9196b177520dd4819347a8ce0937"} Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.936175 4966 scope.go:117] "RemoveContainer" containerID="3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7" Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.935830 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lhvdf" Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.958763 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fb22a22f-5bef-4207-a218-78f78129d538" containerName="cinder-scheduler" containerID="cri-o://2652351c1a18677ae03404b9cca67082f1e119d16cd030b8b48c966e2c3d8209" gracePeriod=30 Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.960026 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7212827-c660-4b8a-b0ef-62d91f255dd6","Type":"ContainerStarted","Data":"9fd2c4de3d6356a1b2e835c4faa5833a93ee97f33c27663bbf0998039c9b7370"} Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.960515 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fb22a22f-5bef-4207-a218-78f78129d538" containerName="probe" containerID="cri-o://8f73022b4143e178e9d48477cbb119ce9d8a9ae56f9d49fea8488827be8b7de7" gracePeriod=30 Jan 27 16:06:12 crc kubenswrapper[4966]: I0127 16:06:12.960569 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.012499 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.01247931 podStartE2EDuration="4.01247931s" podCreationTimestamp="2026-01-27 16:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:13.00323086 +0000 UTC m=+1439.306024358" watchObservedRunningTime="2026-01-27 16:06:13.01247931 +0000 UTC m=+1439.315272798" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.024329 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1a7725b-4536-4728-9aa7-99ca1c44daa6" (UID: "c1a7725b-4536-4728-9aa7-99ca1c44daa6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.024373 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpdj4\" (UniqueName: \"kubernetes.io/projected/c1a7725b-4536-4728-9aa7-99ca1c44daa6-kube-api-access-lpdj4\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.053688 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c1a7725b-4536-4728-9aa7-99ca1c44daa6" (UID: "c1a7725b-4536-4728-9aa7-99ca1c44daa6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.060961 4966 scope.go:117] "RemoveContainer" containerID="399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.084307 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1a7725b-4536-4728-9aa7-99ca1c44daa6" (UID: "c1a7725b-4536-4728-9aa7-99ca1c44daa6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.096339 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-config" (OuterVolumeSpecName: "config") pod "c1a7725b-4536-4728-9aa7-99ca1c44daa6" (UID: "c1a7725b-4536-4728-9aa7-99ca1c44daa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.111188 4966 scope.go:117] "RemoveContainer" containerID="3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.111527 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1a7725b-4536-4728-9aa7-99ca1c44daa6" (UID: "c1a7725b-4536-4728-9aa7-99ca1c44daa6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:13 crc kubenswrapper[4966]: E0127 16:06:13.118053 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7\": container with ID starting with 3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7 not found: ID does not exist" containerID="3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.118108 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7"} err="failed to get container status \"3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7\": rpc error: code = NotFound desc = could not find container \"3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7\": container with ID starting with 3d274a7defd5c586b9e8d4c4aa79a78cf623ffe79e6f44cff7d2d3f4aca9d1d7 not found: ID does not exist" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.118137 4966 scope.go:117] "RemoveContainer" containerID="399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4" Jan 27 16:06:13 crc kubenswrapper[4966]: E0127 16:06:13.122361 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4\": container with ID starting with 399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4 not found: ID does not exist" containerID="399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.122407 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4"} err="failed to get container status \"399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4\": rpc error: code = NotFound desc = could not find container \"399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4\": container with ID starting with 399322d621c266f60dd71784a392209b6f80f692969184e92043f604bac485d4 not found: ID does not exist" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.126560 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.126599 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.126612 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.126622 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.126630 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/c1a7725b-4536-4728-9aa7-99ca1c44daa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.305390 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lhvdf"] Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.333130 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lhvdf"] Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.971706 4966 generic.go:334] "Generic (PLEG): container finished" podID="fb22a22f-5bef-4207-a218-78f78129d538" containerID="8f73022b4143e178e9d48477cbb119ce9d8a9ae56f9d49fea8488827be8b7de7" exitCode=0 Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.971965 4966 generic.go:334] "Generic (PLEG): container finished" podID="fb22a22f-5bef-4207-a218-78f78129d538" containerID="2652351c1a18677ae03404b9cca67082f1e119d16cd030b8b48c966e2c3d8209" exitCode=0 Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.971915 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb22a22f-5bef-4207-a218-78f78129d538","Type":"ContainerDied","Data":"8f73022b4143e178e9d48477cbb119ce9d8a9ae56f9d49fea8488827be8b7de7"} Jan 27 16:06:13 crc kubenswrapper[4966]: I0127 16:06:13.972014 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb22a22f-5bef-4207-a218-78f78129d538","Type":"ContainerDied","Data":"2652351c1a18677ae03404b9cca67082f1e119d16cd030b8b48c966e2c3d8209"} Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.302188 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f68b888d8-25wpv" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.563737 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a7725b-4536-4728-9aa7-99ca1c44daa6" path="/var/lib/kubelet/pods/c1a7725b-4536-4728-9aa7-99ca1c44daa6/volumes" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.692250 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.789113 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhpgk\" (UniqueName: \"kubernetes.io/projected/fb22a22f-5bef-4207-a218-78f78129d538-kube-api-access-zhpgk\") pod \"fb22a22f-5bef-4207-a218-78f78129d538\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.789234 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-combined-ca-bundle\") pod \"fb22a22f-5bef-4207-a218-78f78129d538\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.789361 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-scripts\") pod \"fb22a22f-5bef-4207-a218-78f78129d538\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.789422 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb22a22f-5bef-4207-a218-78f78129d538-etc-machine-id\") pod \"fb22a22f-5bef-4207-a218-78f78129d538\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.789459 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data\") pod \"fb22a22f-5bef-4207-a218-78f78129d538\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.789497 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data-custom\") pod \"fb22a22f-5bef-4207-a218-78f78129d538\" (UID: \"fb22a22f-5bef-4207-a218-78f78129d538\") " Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.789525 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb22a22f-5bef-4207-a218-78f78129d538-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fb22a22f-5bef-4207-a218-78f78129d538" (UID: "fb22a22f-5bef-4207-a218-78f78129d538"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.790125 4966 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb22a22f-5bef-4207-a218-78f78129d538-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.795039 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb22a22f-5bef-4207-a218-78f78129d538-kube-api-access-zhpgk" (OuterVolumeSpecName: "kube-api-access-zhpgk") pod "fb22a22f-5bef-4207-a218-78f78129d538" (UID: "fb22a22f-5bef-4207-a218-78f78129d538"). InnerVolumeSpecName "kube-api-access-zhpgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.796025 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-scripts" (OuterVolumeSpecName: "scripts") pod "fb22a22f-5bef-4207-a218-78f78129d538" (UID: "fb22a22f-5bef-4207-a218-78f78129d538"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.797646 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb22a22f-5bef-4207-a218-78f78129d538" (UID: "fb22a22f-5bef-4207-a218-78f78129d538"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.859342 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb22a22f-5bef-4207-a218-78f78129d538" (UID: "fb22a22f-5bef-4207-a218-78f78129d538"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.892659 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhpgk\" (UniqueName: \"kubernetes.io/projected/fb22a22f-5bef-4207-a218-78f78129d538-kube-api-access-zhpgk\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.892692 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.892714 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.892725 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.912141 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data" (OuterVolumeSpecName: "config-data") pod "fb22a22f-5bef-4207-a218-78f78129d538" (UID: "fb22a22f-5bef-4207-a218-78f78129d538"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.961527 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f68b888d8-25wpv" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.984626 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb22a22f-5bef-4207-a218-78f78129d538","Type":"ContainerDied","Data":"5da95639d31fc0e3c2fc04978ddddca842b54446ae7913a6b294d295d05500c1"} Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.984686 4966 scope.go:117] "RemoveContainer" containerID="8f73022b4143e178e9d48477cbb119ce9d8a9ae56f9d49fea8488827be8b7de7" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.984711 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 16:06:14 crc kubenswrapper[4966]: I0127 16:06:14.994379 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb22a22f-5bef-4207-a218-78f78129d538-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.034648 4966 scope.go:117] "RemoveContainer" containerID="2652351c1a18677ae03404b9cca67082f1e119d16cd030b8b48c966e2c3d8209" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.043881 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.048321 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7d7bd8f5c6-k4z4r" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.066202 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.100947 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 16:06:15 crc kubenswrapper[4966]: E0127 16:06:15.101829 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a7725b-4536-4728-9aa7-99ca1c44daa6" containerName="init" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.101844 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a7725b-4536-4728-9aa7-99ca1c44daa6" containerName="init" Jan 27 16:06:15 crc kubenswrapper[4966]: E0127 16:06:15.101854 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb22a22f-5bef-4207-a218-78f78129d538" containerName="cinder-scheduler" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.101861 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb22a22f-5bef-4207-a218-78f78129d538" containerName="cinder-scheduler" Jan 27 16:06:15 crc kubenswrapper[4966]: E0127 16:06:15.101885 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a7725b-4536-4728-9aa7-99ca1c44daa6" containerName="dnsmasq-dns" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.101890 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a7725b-4536-4728-9aa7-99ca1c44daa6" containerName="dnsmasq-dns" Jan 27 16:06:15 crc kubenswrapper[4966]: E0127 16:06:15.101921 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb22a22f-5bef-4207-a218-78f78129d538" containerName="probe" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.101927 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb22a22f-5bef-4207-a218-78f78129d538" containerName="probe" Jan 27 16:06:15 crc kubenswrapper[4966]: 
I0127 16:06:15.102116 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb22a22f-5bef-4207-a218-78f78129d538" containerName="cinder-scheduler" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.102136 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a7725b-4536-4728-9aa7-99ca1c44daa6" containerName="dnsmasq-dns" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.102150 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb22a22f-5bef-4207-a218-78f78129d538" containerName="probe" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.103472 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.108774 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.126965 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.199881 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/987fabcb-b141-4f03-96fe-d2acf923452c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.200053 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.200275 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.200332 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9bd\" (UniqueName: \"kubernetes.io/projected/987fabcb-b141-4f03-96fe-d2acf923452c-kube-api-access-jw9bd\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.200516 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-scripts\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.200684 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-config-data\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.303679 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/987fabcb-b141-4f03-96fe-d2acf923452c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.303808 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/987fabcb-b141-4f03-96fe-d2acf923452c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.303869 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.304014 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.304134 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9bd\" (UniqueName: \"kubernetes.io/projected/987fabcb-b141-4f03-96fe-d2acf923452c-kube-api-access-jw9bd\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.304251 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-scripts\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.304362 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-config-data\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.309344 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-config-data\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.310401 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.321559 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-scripts\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.321876 4966 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw9bd\" (UniqueName: \"kubernetes.io/projected/987fabcb-b141-4f03-96fe-d2acf923452c-kube-api-access-jw9bd\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.323384 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/987fabcb-b141-4f03-96fe-d2acf923452c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"987fabcb-b141-4f03-96fe-d2acf923452c\") " pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.427682 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.924295 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 16:06:15 crc kubenswrapper[4966]: I0127 16:06:15.996887 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"987fabcb-b141-4f03-96fe-d2acf923452c","Type":"ContainerStarted","Data":"6fe8b5f0384f15a1ebe40ac727b43721c7584ef2336d1cda698598c10aef3808"} Jan 27 16:06:16 crc kubenswrapper[4966]: I0127 16:06:16.538680 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb22a22f-5bef-4207-a218-78f78129d538" path="/var/lib/kubelet/pods/fb22a22f-5bef-4207-a218-78f78129d538/volumes" Jan 27 16:06:16 crc kubenswrapper[4966]: I0127 16:06:16.683067 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:06:17 crc kubenswrapper[4966]: I0127 16:06:17.027435 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"987fabcb-b141-4f03-96fe-d2acf923452c","Type":"ContainerStarted","Data":"e1196732ce70b236cab0778765692e918642b8d2323d9109cf37e08adf689da2"} Jan 27 16:06:18 crc kubenswrapper[4966]: I0127 16:06:18.037658 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"987fabcb-b141-4f03-96fe-d2acf923452c","Type":"ContainerStarted","Data":"375cfe3faf1b8abe5cd99c6733d0234c8567de870a98f8a004a92e651a55b3cb"} Jan 27 16:06:18 crc kubenswrapper[4966]: I0127 16:06:18.073126 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.073107378 podStartE2EDuration="3.073107378s" podCreationTimestamp="2026-01-27 16:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:18.056164896 +0000 UTC m=+1444.358958404" watchObservedRunningTime="2026-01-27 16:06:18.073107378 +0000 UTC m=+1444.375900866" Jan 27 16:06:18 crc kubenswrapper[4966]: I0127 16:06:18.654395 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c7444dc4c-gxtck" Jan 27 16:06:18 crc kubenswrapper[4966]: I0127 16:06:18.723043 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f886fdd58-d9dcc"] Jan 27 16:06:18 crc kubenswrapper[4966]: I0127 16:06:18.723260 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f886fdd58-d9dcc" podUID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerName="neutron-api" 
containerID="cri-o://a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112" gracePeriod=30 Jan 27 16:06:18 crc kubenswrapper[4966]: I0127 16:06:18.723679 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f886fdd58-d9dcc" podUID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerName="neutron-httpd" containerID="cri-o://231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff" gracePeriod=30 Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.049806 4966 generic.go:334] "Generic (PLEG): container finished" podID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerID="231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff" exitCode=0 Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.051348 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f886fdd58-d9dcc" event={"ID":"4b821f12-5d2b-476d-9a06-82c533b408cc","Type":"ContainerDied","Data":"231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff"} Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.089352 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.091388 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.101623 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.101772 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4z5\" (UniqueName: \"kubernetes.io/projected/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-kube-api-access-7v4z5\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.101860 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.101924 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config-secret\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.108200 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-86bfd" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.108356 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.108485 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.126515 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstackclient"] Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.204095 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.204215 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v4z5\" (UniqueName: \"kubernetes.io/projected/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-kube-api-access-7v4z5\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.204308 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.204375 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config-secret\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.205375 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.219596 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config-secret\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.219758 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.224885 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.232638 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v4z5\" (UniqueName: \"kubernetes.io/projected/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-kube-api-access-7v4z5\") pod \"openstackclient\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.290518 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kxq52"] Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.292751 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.306972 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtfqk\" (UniqueName: \"kubernetes.io/projected/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-kube-api-access-qtfqk\") pod \"redhat-operators-kxq52\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.307355 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-catalog-content\") pod \"redhat-operators-kxq52\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.307527 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-utilities\") pod \"redhat-operators-kxq52\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.308684 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxq52"] Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.408871 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtfqk\" (UniqueName: \"kubernetes.io/projected/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-kube-api-access-qtfqk\") pod \"redhat-operators-kxq52\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.409056 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-catalog-content\") pod \"redhat-operators-kxq52\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.409131 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-utilities\") pod \"redhat-operators-kxq52\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.410484 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-utilities\") pod \"redhat-operators-kxq52\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.410573 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-catalog-content\") pod \"redhat-operators-kxq52\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.428756 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] 
Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.429957 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.437699 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtfqk\" (UniqueName: \"kubernetes.io/projected/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-kube-api-access-qtfqk\") pod \"redhat-operators-kxq52\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.452967 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.490208 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.492253 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.520346 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.542405 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8d96bc9db-xwrs6" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.619922 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwq9\" (UniqueName: \"kubernetes.io/projected/5461689b-4309-4ffe-9b5a-fef2eba77915-kube-api-access-bmwq9\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.620034 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5461689b-4309-4ffe-9b5a-fef2eba77915-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.620074 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5461689b-4309-4ffe-9b5a-fef2eba77915-openstack-config\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.620096 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5461689b-4309-4ffe-9b5a-fef2eba77915-openstack-config-secret\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.658728 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.686361 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-f5c687874-swft6"] Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.687332 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-f5c687874-swft6" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api-log" containerID="cri-o://ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6" gracePeriod=30 Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.687821 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-f5c687874-swft6" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api" containerID="cri-o://303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40" gracePeriod=30 Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.722872 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmwq9\" (UniqueName: \"kubernetes.io/projected/5461689b-4309-4ffe-9b5a-fef2eba77915-kube-api-access-bmwq9\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.723418 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5461689b-4309-4ffe-9b5a-fef2eba77915-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.723465 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5461689b-4309-4ffe-9b5a-fef2eba77915-openstack-config\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.723484 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5461689b-4309-4ffe-9b5a-fef2eba77915-openstack-config-secret\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.726791 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5461689b-4309-4ffe-9b5a-fef2eba77915-openstack-config\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.737081 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5461689b-4309-4ffe-9b5a-fef2eba77915-openstack-config-secret\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.740835 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5461689b-4309-4ffe-9b5a-fef2eba77915-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 
16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.747570 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmwq9\" (UniqueName: \"kubernetes.io/projected/5461689b-4309-4ffe-9b5a-fef2eba77915-kube-api-access-bmwq9\") pod \"openstackclient\" (UID: \"5461689b-4309-4ffe-9b5a-fef2eba77915\") " pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: E0127 16:06:19.868002 4966 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 16:06:19 crc kubenswrapper[4966]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_9abfdd5d-c36d-4412-a88b-0a55237fb5b0_0(ac393a0da440b3fe7caac7bfc21a3459bcbb3b9816863d61188554a8207193d5): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ac393a0da440b3fe7caac7bfc21a3459bcbb3b9816863d61188554a8207193d5" Netns:"/var/run/netns/c4dab654-f95b-48fd-9f25-5c243607a420" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=ac393a0da440b3fe7caac7bfc21a3459bcbb3b9816863d61188554a8207193d5;K8S_POD_UID=9abfdd5d-c36d-4412-a88b-0a55237fb5b0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/9abfdd5d-c36d-4412-a88b-0a55237fb5b0]: expected pod UID "9abfdd5d-c36d-4412-a88b-0a55237fb5b0" but got "5461689b-4309-4ffe-9b5a-fef2eba77915" from Kube API Jan 27 16:06:19 crc kubenswrapper[4966]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 16:06:19 crc kubenswrapper[4966]: > Jan 27 16:06:19 crc kubenswrapper[4966]: E0127 16:06:19.868057 4966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 16:06:19 crc kubenswrapper[4966]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_9abfdd5d-c36d-4412-a88b-0a55237fb5b0_0(ac393a0da440b3fe7caac7bfc21a3459bcbb3b9816863d61188554a8207193d5): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ac393a0da440b3fe7caac7bfc21a3459bcbb3b9816863d61188554a8207193d5" Netns:"/var/run/netns/c4dab654-f95b-48fd-9f25-5c243607a420" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=ac393a0da440b3fe7caac7bfc21a3459bcbb3b9816863d61188554a8207193d5;K8S_POD_UID=9abfdd5d-c36d-4412-a88b-0a55237fb5b0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/9abfdd5d-c36d-4412-a88b-0a55237fb5b0]: expected pod UID "9abfdd5d-c36d-4412-a88b-0a55237fb5b0" but got "5461689b-4309-4ffe-9b5a-fef2eba77915" from Kube API Jan 27 16:06:19 crc kubenswrapper[4966]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 16:06:19 crc kubenswrapper[4966]: > pod="openstack/openstackclient" Jan 27 16:06:19 crc kubenswrapper[4966]: I0127 16:06:19.890734 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.078569 4966 generic.go:334] "Generic (PLEG): container finished" podID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerID="ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6" exitCode=143 Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.078973 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f5c687874-swft6" event={"ID":"553ff3ba-f6fa-4006-925f-4c5d43741caa","Type":"ContainerDied","Data":"ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6"} Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.079125 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.083506 4966 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9abfdd5d-c36d-4412-a88b-0a55237fb5b0" podUID="5461689b-4309-4ffe-9b5a-fef2eba77915" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.098110 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.234477 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v4z5\" (UniqueName: \"kubernetes.io/projected/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-kube-api-access-7v4z5\") pod \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.234581 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-combined-ca-bundle\") pod \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.234708 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config-secret\") pod \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.234748 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config\") pod \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\" (UID: \"9abfdd5d-c36d-4412-a88b-0a55237fb5b0\") " Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.235725 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod 
"9abfdd5d-c36d-4412-a88b-0a55237fb5b0" (UID: "9abfdd5d-c36d-4412-a88b-0a55237fb5b0"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.241046 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-kube-api-access-7v4z5" (OuterVolumeSpecName: "kube-api-access-7v4z5") pod "9abfdd5d-c36d-4412-a88b-0a55237fb5b0" (UID: "9abfdd5d-c36d-4412-a88b-0a55237fb5b0"). InnerVolumeSpecName "kube-api-access-7v4z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.245056 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9abfdd5d-c36d-4412-a88b-0a55237fb5b0" (UID: "9abfdd5d-c36d-4412-a88b-0a55237fb5b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.255062 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9abfdd5d-c36d-4412-a88b-0a55237fb5b0" (UID: "9abfdd5d-c36d-4412-a88b-0a55237fb5b0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.318577 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxq52"] Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.338746 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.338787 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.338797 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v4z5\" (UniqueName: \"kubernetes.io/projected/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-kube-api-access-7v4z5\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.338805 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abfdd5d-c36d-4412-a88b-0a55237fb5b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.429127 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.494627 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 16:06:20 crc kubenswrapper[4966]: I0127 16:06:20.537393 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9abfdd5d-c36d-4412-a88b-0a55237fb5b0" path="/var/lib/kubelet/pods/9abfdd5d-c36d-4412-a88b-0a55237fb5b0/volumes" Jan 27 16:06:21 crc kubenswrapper[4966]: I0127 16:06:21.090986 4966 generic.go:334] "Generic (PLEG): container finished" 
podID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerID="de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143" exitCode=0 Jan 27 16:06:21 crc kubenswrapper[4966]: I0127 16:06:21.091082 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxq52" event={"ID":"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb","Type":"ContainerDied","Data":"de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143"} Jan 27 16:06:21 crc kubenswrapper[4966]: I0127 16:06:21.091268 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxq52" event={"ID":"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb","Type":"ContainerStarted","Data":"0c07757bae41ae652d530b000539107f1e8639dff404fe518c1980e387f952d8"} Jan 27 16:06:21 crc kubenswrapper[4966]: I0127 16:06:21.092721 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 16:06:21 crc kubenswrapper[4966]: I0127 16:06:21.092725 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5461689b-4309-4ffe-9b5a-fef2eba77915","Type":"ContainerStarted","Data":"fdf7ac91197d099d103a4c3d67ea4184ef2b801bf05d3db1eeba2164d75162aa"} Jan 27 16:06:21 crc kubenswrapper[4966]: I0127 16:06:21.120066 4966 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9abfdd5d-c36d-4412-a88b-0a55237fb5b0" podUID="5461689b-4309-4ffe-9b5a-fef2eba77915" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.544461 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.789350 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.818672 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-combined-ca-bundle\") pod \"4b821f12-5d2b-476d-9a06-82c533b408cc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.818811 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-config\") pod \"4b821f12-5d2b-476d-9a06-82c533b408cc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.818887 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5tkr\" (UniqueName: \"kubernetes.io/projected/4b821f12-5d2b-476d-9a06-82c533b408cc-kube-api-access-g5tkr\") pod \"4b821f12-5d2b-476d-9a06-82c533b408cc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.818990 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-httpd-config\") pod \"4b821f12-5d2b-476d-9a06-82c533b408cc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.819107 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-ovndb-tls-certs\") pod \"4b821f12-5d2b-476d-9a06-82c533b408cc\" (UID: \"4b821f12-5d2b-476d-9a06-82c533b408cc\") " Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.829725 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4b821f12-5d2b-476d-9a06-82c533b408cc" (UID: "4b821f12-5d2b-476d-9a06-82c533b408cc"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.840855 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b821f12-5d2b-476d-9a06-82c533b408cc-kube-api-access-g5tkr" (OuterVolumeSpecName: "kube-api-access-g5tkr") pod "4b821f12-5d2b-476d-9a06-82c533b408cc" (UID: "4b821f12-5d2b-476d-9a06-82c533b408cc"). InnerVolumeSpecName "kube-api-access-g5tkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.913623 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b821f12-5d2b-476d-9a06-82c533b408cc" (UID: "4b821f12-5d2b-476d-9a06-82c533b408cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.915135 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-config" (OuterVolumeSpecName: "config") pod "4b821f12-5d2b-476d-9a06-82c533b408cc" (UID: "4b821f12-5d2b-476d-9a06-82c533b408cc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.922042 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.922083 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.922097 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5tkr\" (UniqueName: \"kubernetes.io/projected/4b821f12-5d2b-476d-9a06-82c533b408cc-kube-api-access-g5tkr\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.922110 4966 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:22 crc kubenswrapper[4966]: I0127 16:06:22.930635 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4b821f12-5d2b-476d-9a06-82c533b408cc" (UID: "4b821f12-5d2b-476d-9a06-82c533b408cc"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.024768 4966 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b821f12-5d2b-476d-9a06-82c533b408cc-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.122044 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxq52" event={"ID":"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb","Type":"ContainerStarted","Data":"9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066"} Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.126530 4966 generic.go:334] "Generic (PLEG): container finished" podID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerID="a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112" exitCode=0 Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.126575 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f886fdd58-d9dcc" event={"ID":"4b821f12-5d2b-476d-9a06-82c533b408cc","Type":"ContainerDied","Data":"a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112"} Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.126607 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f886fdd58-d9dcc" event={"ID":"4b821f12-5d2b-476d-9a06-82c533b408cc","Type":"ContainerDied","Data":"f2ffe967c0ba215d63a0dadc854d1694a81c2c15b0edb30fdb5693c6eb0fb1df"} Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.126627 4966 scope.go:117] "RemoveContainer" containerID="231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.126760 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f886fdd58-d9dcc" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.167195 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-f5c687874-swft6" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.202:9311/healthcheck\": read tcp 10.217.0.2:55752->10.217.0.202:9311: read: connection reset by peer" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.167195 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-f5c687874-swft6" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.202:9311/healthcheck\": read tcp 10.217.0.2:55748->10.217.0.202:9311: read: connection reset by peer" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.259086 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f886fdd58-d9dcc"] Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.261413 4966 scope.go:117] "RemoveContainer" containerID="a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.276616 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f886fdd58-d9dcc"] Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.327448 4966 scope.go:117] "RemoveContainer" containerID="231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff" Jan 27 16:06:23 crc kubenswrapper[4966]: E0127 16:06:23.328303 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff\": container with ID starting with 231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff not found: ID does not exist" containerID="231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.328352 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff"} err="failed to get container status \"231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff\": rpc error: code = NotFound desc = could not find container \"231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff\": container with ID starting with 231061fda531e6c517b98391a2b224fe2af512ea2318046034b9ac10b9af16ff not found: ID does not exist" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.328382 4966 scope.go:117] "RemoveContainer" containerID="a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112" Jan 27 16:06:23 crc kubenswrapper[4966]: E0127 16:06:23.329393 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112\": container with ID starting with a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112 not found: ID does not exist" containerID="a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112" Jan 27 16:06:23 crc kubenswrapper[4966]: I0127 16:06:23.329444 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112"} err="failed to get container status 
\"a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112\": rpc error: code = NotFound desc = could not find container \"a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112\": container with ID starting with a8b590e06b1aa4545f0914bd63c32e52516019a41e89d0157811b8b4aed63112 not found: ID does not exist" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.016888 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.139477 4966 generic.go:334] "Generic (PLEG): container finished" podID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerID="303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40" exitCode=0 Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.139811 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f5c687874-swft6" event={"ID":"553ff3ba-f6fa-4006-925f-4c5d43741caa","Type":"ContainerDied","Data":"303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40"} Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.139816 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f5c687874-swft6" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.139839 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f5c687874-swft6" event={"ID":"553ff3ba-f6fa-4006-925f-4c5d43741caa","Type":"ContainerDied","Data":"cf6377fc76930e9cc5ddc5850da4f4a1dcb91616ef91bb9de322a8818bdabf2c"} Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.139855 4966 scope.go:117] "RemoveContainer" containerID="303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.183596 4966 scope.go:117] "RemoveContainer" containerID="ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.189156 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data-custom\") pod \"553ff3ba-f6fa-4006-925f-4c5d43741caa\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.189216 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553ff3ba-f6fa-4006-925f-4c5d43741caa-logs\") pod \"553ff3ba-f6fa-4006-925f-4c5d43741caa\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.189519 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553ff3ba-f6fa-4006-925f-4c5d43741caa-logs" (OuterVolumeSpecName: "logs") pod "553ff3ba-f6fa-4006-925f-4c5d43741caa" (UID: "553ff3ba-f6fa-4006-925f-4c5d43741caa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.189680 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpl5q\" (UniqueName: \"kubernetes.io/projected/553ff3ba-f6fa-4006-925f-4c5d43741caa-kube-api-access-gpl5q\") pod \"553ff3ba-f6fa-4006-925f-4c5d43741caa\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.189758 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data\") pod \"553ff3ba-f6fa-4006-925f-4c5d43741caa\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.189821 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-combined-ca-bundle\") pod \"553ff3ba-f6fa-4006-925f-4c5d43741caa\" (UID: \"553ff3ba-f6fa-4006-925f-4c5d43741caa\") " Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.191055 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553ff3ba-f6fa-4006-925f-4c5d43741caa-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.196054 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553ff3ba-f6fa-4006-925f-4c5d43741caa-kube-api-access-gpl5q" (OuterVolumeSpecName: "kube-api-access-gpl5q") pod "553ff3ba-f6fa-4006-925f-4c5d43741caa" (UID: "553ff3ba-f6fa-4006-925f-4c5d43741caa"). InnerVolumeSpecName "kube-api-access-gpl5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.215869 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "553ff3ba-f6fa-4006-925f-4c5d43741caa" (UID: "553ff3ba-f6fa-4006-925f-4c5d43741caa"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.230803 4966 scope.go:117] "RemoveContainer" containerID="303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40" Jan 27 16:06:24 crc kubenswrapper[4966]: E0127 16:06:24.231354 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40\": container with ID starting with 303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40 not found: ID does not exist" containerID="303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.231403 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40"} err="failed to get container status \"303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40\": rpc error: code = NotFound desc = could not find container \"303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40\": container with ID starting with 303c0ce661863e2601c3f27ce4b6242771a0a594924877aca1d8f327ef7a7e40 not found: ID does not exist" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.231499 4966 scope.go:117] "RemoveContainer" containerID="ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6" Jan 27 16:06:24 crc kubenswrapper[4966]: E0127 16:06:24.231850 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6\": container with ID starting with ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6 not found: ID does not exist" containerID="ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.231952 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6"} err="failed to get container status \"ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6\": rpc error: code = NotFound desc = could not find container \"ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6\": container with ID starting with ff1f238673b10dbf617fc9443f7665d351815c0d97d01b94e7c2e82287541fa6 not found: ID does not exist" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.237883 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "553ff3ba-f6fa-4006-925f-4c5d43741caa" (UID: "553ff3ba-f6fa-4006-925f-4c5d43741caa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.269936 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data" (OuterVolumeSpecName: "config-data") pod "553ff3ba-f6fa-4006-925f-4c5d43741caa" (UID: "553ff3ba-f6fa-4006-925f-4c5d43741caa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.293517 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.293553 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpl5q\" (UniqueName: \"kubernetes.io/projected/553ff3ba-f6fa-4006-925f-4c5d43741caa-kube-api-access-gpl5q\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.293565 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.293575 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553ff3ba-f6fa-4006-925f-4c5d43741caa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.485631 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-f5c687874-swft6"] Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.504098 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-f5c687874-swft6"] Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.536208 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b821f12-5d2b-476d-9a06-82c533b408cc" path="/var/lib/kubelet/pods/4b821f12-5d2b-476d-9a06-82c533b408cc/volumes" Jan 27 16:06:24 crc kubenswrapper[4966]: I0127 16:06:24.537168 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" path="/var/lib/kubelet/pods/553ff3ba-f6fa-4006-925f-4c5d43741caa/volumes" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.325554 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.326423 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="proxy-httpd" containerID="cri-o://9f96bb4e225ba3a3b8aea9db3e93be6156e744aa151f965afcb1acd3624c8bed" gracePeriod=30 Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.326583 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="sg-core" containerID="cri-o://671c0014085b324b356b6f812ef1cd434ff28515f46fa482c08e3321703cbc0a" gracePeriod=30 Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.326649 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="ceilometer-notification-agent" containerID="cri-o://5bbd3659f41fd5066a31dffdb9e308a96d92349f0b720d62c2bf6a95b77d7b0e" gracePeriod=30 Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.327126 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="ceilometer-central-agent" containerID="cri-o://1dec27b91ce2f475d477ea70f8314f9a1eb7cea3697059c003c9e588fccff574" gracePeriod=30 Jan 27 16:06:25 crc 
kubenswrapper[4966]: I0127 16:06:25.332271 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.687098 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7f67cf7b6c-fm8vs"] Jan 27 16:06:25 crc kubenswrapper[4966]: E0127 16:06:25.687970 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api-log" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.687993 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api-log" Jan 27 16:06:25 crc kubenswrapper[4966]: E0127 16:06:25.688026 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerName="neutron-api" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.688034 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerName="neutron-api" Jan 27 16:06:25 crc kubenswrapper[4966]: E0127 16:06:25.688065 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerName="neutron-httpd" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.688075 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerName="neutron-httpd" Jan 27 16:06:25 crc kubenswrapper[4966]: E0127 16:06:25.688106 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.688115 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.688357 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api-log" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.688384 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerName="neutron-httpd" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.688401 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="553ff3ba-f6fa-4006-925f-4c5d43741caa" containerName="barbican-api" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.688433 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b821f12-5d2b-476d-9a06-82c533b408cc" containerName="neutron-api" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.690070 4966 util.go:30] "No sandbox for pod can be found. 
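The RemoveContainer / "DeleteContainer returned error" exchange above shows the kubelet tolerating a NotFound from the container runtime: the container is already gone, so removal has effectively succeeded. Below is a minimal Go sketch of that idempotent-deletion pattern, written against a hypothetical runtimeClient interface rather than the real CRI client.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient stands in for the CRI runtime service; hypothetical.
type runtimeClient interface {
	RemoveContainer(id string) error
}

// removeContainerIdempotent mirrors the log pattern above: a NotFound
// from the runtime means "already removed", which counts as success.
func removeContainerIdempotent(rc runtimeClient, id string) error {
	if err := rc.RemoveContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // container already gone; nothing left to do
		}
		return fmt.Errorf("remove container %q: %w", id, err)
	}
	return nil
}

// fakeRuntime reproduces the runtime's answer seen in the log.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	err := removeContainerIdempotent(fakeRuntime{}, "303c0ce66186")
	fmt.Println("err:", err) // prints "err: <nil>": the NotFound was tolerated
}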
Need to start a new one" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.693189 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.693442 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.693562 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.710211 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7f67cf7b6c-fm8vs"] Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.836307 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-public-tls-certs\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.836398 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bb0a5c7d-bb55-4f56-9f03-268df91b2748-etc-swift\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.836429 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0a5c7d-bb55-4f56-9f03-268df91b2748-run-httpd\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.836555 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-config-data\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.836653 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0a5c7d-bb55-4f56-9f03-268df91b2748-log-httpd\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.836773 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtbfc\" (UniqueName: \"kubernetes.io/projected/bb0a5c7d-bb55-4f56-9f03-268df91b2748-kube-api-access-vtbfc\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.836808 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-combined-ca-bundle\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " 
pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.836854 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-internal-tls-certs\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.837211 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.939530 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-public-tls-certs\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.939640 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bb0a5c7d-bb55-4f56-9f03-268df91b2748-etc-swift\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.939667 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0a5c7d-bb55-4f56-9f03-268df91b2748-run-httpd\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.939699 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-config-data\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.939779 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0a5c7d-bb55-4f56-9f03-268df91b2748-log-httpd\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.939851 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtbfc\" (UniqueName: \"kubernetes.io/projected/bb0a5c7d-bb55-4f56-9f03-268df91b2748-kube-api-access-vtbfc\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.939874 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-combined-ca-bundle\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.939920 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-internal-tls-certs\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.940993 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0a5c7d-bb55-4f56-9f03-268df91b2748-log-httpd\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.941017 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0a5c7d-bb55-4f56-9f03-268df91b2748-run-httpd\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.946186 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-config-data\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.947511 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-internal-tls-certs\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.947547 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-combined-ca-bundle\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.949614 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a5c7d-bb55-4f56-9f03-268df91b2748-public-tls-certs\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.952258 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bb0a5c7d-bb55-4f56-9f03-268df91b2748-etc-swift\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:25 crc kubenswrapper[4966]: I0127 16:06:25.957711 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtbfc\" (UniqueName: \"kubernetes.io/projected/bb0a5c7d-bb55-4f56-9f03-268df91b2748-kube-api-access-vtbfc\") pod \"swift-proxy-7f67cf7b6c-fm8vs\" (UID: \"bb0a5c7d-bb55-4f56-9f03-268df91b2748\") " pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.012720 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.163619 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-84f46dccf4-wb9z5"] Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.175089 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.181344 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.181844 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.182227 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-99swl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.212793 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-84f46dccf4-wb9z5"] Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.213318 4966 generic.go:334] "Generic (PLEG): container finished" podID="d50b72a4-4856-4248-b784-239009beb314" containerID="9f96bb4e225ba3a3b8aea9db3e93be6156e744aa151f965afcb1acd3624c8bed" exitCode=0 Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.213339 4966 generic.go:334] "Generic (PLEG): container finished" podID="d50b72a4-4856-4248-b784-239009beb314" containerID="671c0014085b324b356b6f812ef1cd434ff28515f46fa482c08e3321703cbc0a" exitCode=2 Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.213349 4966 generic.go:334] "Generic (PLEG): container finished" podID="d50b72a4-4856-4248-b784-239009beb314" containerID="1dec27b91ce2f475d477ea70f8314f9a1eb7cea3697059c003c9e588fccff574" exitCode=0 Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.213363 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerDied","Data":"9f96bb4e225ba3a3b8aea9db3e93be6156e744aa151f965afcb1acd3624c8bed"} Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.213385 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerDied","Data":"671c0014085b324b356b6f812ef1cd434ff28515f46fa482c08e3321703cbc0a"} Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.213397 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerDied","Data":"1dec27b91ce2f475d477ea70f8314f9a1eb7cea3697059c003c9e588fccff574"} Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.291022 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-mlr65"] Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.296789 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.302580 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-mlr65"] Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.334815 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7444bc9f5b-kfthl"] Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.336365 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.339241 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.349477 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7444bc9f5b-kfthl"] Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.354150 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.354228 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcch2\" (UniqueName: \"kubernetes.io/projected/d4704b7a-7be4-4303-ab29-d9123555133f-kube-api-access-zcch2\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.354262 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data-custom\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.354277 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-combined-ca-bundle\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.363949 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5cfcc75859-nl7s5"] Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.365680 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.369439 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.396782 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cfcc75859-nl7s5"] Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.456627 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqqb\" (UniqueName: \"kubernetes.io/projected/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-kube-api-access-mgqqb\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.456677 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data-custom\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.456716 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcch2\" (UniqueName: \"kubernetes.io/projected/d4704b7a-7be4-4303-ab29-d9123555133f-kube-api-access-zcch2\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.456851 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data-custom\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.456973 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-combined-ca-bundle\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457052 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457097 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-config\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457141 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t52s6\" (UniqueName: \"kubernetes.io/projected/30e072ee-5227-4c65-8ccd-bc5e01bc5394-kube-api-access-t52s6\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: 
\"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457210 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457424 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data-custom\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457481 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457513 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457588 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457799 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmcng\" (UniqueName: \"kubernetes.io/projected/856ac2bf-cc02-4921-84ff-947e5b947997-kube-api-access-vmcng\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.457885 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.458037 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-combined-ca-bundle\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.458164 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-combined-ca-bundle\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.458240 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.463038 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data-custom\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.470180 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-combined-ca-bundle\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.472103 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.478694 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcch2\" (UniqueName: \"kubernetes.io/projected/d4704b7a-7be4-4303-ab29-d9123555133f-kube-api-access-zcch2\") pod \"heat-engine-84f46dccf4-wb9z5\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.514013 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561061 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561106 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-config\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561130 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t52s6\" (UniqueName: \"kubernetes.io/projected/30e072ee-5227-4c65-8ccd-bc5e01bc5394-kube-api-access-t52s6\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561166 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561209 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data-custom\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561224 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561243 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561275 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561320 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmcng\" (UniqueName: \"kubernetes.io/projected/856ac2bf-cc02-4921-84ff-947e5b947997-kube-api-access-vmcng\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 
16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561357 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561398 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-combined-ca-bundle\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561450 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-combined-ca-bundle\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561509 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqqb\" (UniqueName: \"kubernetes.io/projected/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-kube-api-access-mgqqb\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.561529 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data-custom\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.562106 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.562230 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-config\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.562450 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.562679 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.562906 4966 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.569018 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data-custom\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.571639 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-combined-ca-bundle\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.571765 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-combined-ca-bundle\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.572827 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.583099 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgqqb\" (UniqueName: \"kubernetes.io/projected/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-kube-api-access-mgqqb\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.583315 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data\") pod \"heat-api-5cfcc75859-nl7s5\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.585544 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t52s6\" (UniqueName: \"kubernetes.io/projected/30e072ee-5227-4c65-8ccd-bc5e01bc5394-kube-api-access-t52s6\") pod \"dnsmasq-dns-688b9f5b49-mlr65\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.587418 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmcng\" (UniqueName: \"kubernetes.io/projected/856ac2bf-cc02-4921-84ff-947e5b947997-kube-api-access-vmcng\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.590557 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data-custom\") pod \"heat-cfnapi-7444bc9f5b-kfthl\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.656658 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.670720 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.709456 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:26 crc kubenswrapper[4966]: I0127 16:06:26.962212 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7f67cf7b6c-fm8vs"] Jan 27 16:06:27 crc kubenswrapper[4966]: I0127 16:06:27.187510 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-84f46dccf4-wb9z5"] Jan 27 16:06:27 crc kubenswrapper[4966]: I0127 16:06:27.232555 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" event={"ID":"bb0a5c7d-bb55-4f56-9f03-268df91b2748","Type":"ContainerStarted","Data":"e69b64738fd3b6ed2150e65f722f0f10892193f1c8f0d5ab4212a0599b463351"} Jan 27 16:06:27 crc kubenswrapper[4966]: I0127 16:06:27.235870 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84f46dccf4-wb9z5" event={"ID":"d4704b7a-7be4-4303-ab29-d9123555133f","Type":"ContainerStarted","Data":"ddea96eb08ad5e8cc044256403dc94c12bb7afddbca3d63d11aeb547a6bea4b8"} Jan 27 16:06:27 crc kubenswrapper[4966]: I0127 16:06:27.653320 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cfcc75859-nl7s5"] Jan 27 16:06:27 crc kubenswrapper[4966]: I0127 16:06:27.681121 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-mlr65"] Jan 27 16:06:27 crc kubenswrapper[4966]: I0127 16:06:27.710244 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7444bc9f5b-kfthl"] Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.267529 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" event={"ID":"856ac2bf-cc02-4921-84ff-947e5b947997","Type":"ContainerStarted","Data":"463c0332fa7bb31263dc22103261ca0f2325bd13444d7344bc3a9b048a35ed3e"} Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.273249 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cfcc75859-nl7s5" event={"ID":"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2","Type":"ContainerStarted","Data":"e5667ba4016d27adfc9af86eaeeba676726156e47986f8b7b569e90ddcf52fb9"} Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.276096 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" event={"ID":"30e072ee-5227-4c65-8ccd-bc5e01bc5394","Type":"ContainerStarted","Data":"81422c2d923cc67800aabb2ce129846fa2f578ee9af9adc128a4ba07c530b794"} Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.285127 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.288740 4966 generic.go:334] "Generic (PLEG): container finished" podID="d50b72a4-4856-4248-b784-239009beb314" containerID="5bbd3659f41fd5066a31dffdb9e308a96d92349f0b720d62c2bf6a95b77d7b0e" exitCode=0 Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.288794 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerDied","Data":"5bbd3659f41fd5066a31dffdb9e308a96d92349f0b720d62c2bf6a95b77d7b0e"} Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.288823 4966 scope.go:117] "RemoveContainer" containerID="9f96bb4e225ba3a3b8aea9db3e93be6156e744aa151f965afcb1acd3624c8bed" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.298718 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" event={"ID":"bb0a5c7d-bb55-4f56-9f03-268df91b2748","Type":"ContainerStarted","Data":"7713a79f330650ba38fcbd768317c5e8148a336b9d4fe3ec05988ed8be14e5cd"} Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.298773 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" event={"ID":"bb0a5c7d-bb55-4f56-9f03-268df91b2748","Type":"ContainerStarted","Data":"72e382be7d55764ed1bf1afbda66f62d0e6342dab98fa29543c5a6253bd79215"} Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.298812 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.298827 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.309132 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84f46dccf4-wb9z5" event={"ID":"d4704b7a-7be4-4303-ab29-d9123555133f","Type":"ContainerStarted","Data":"f7f564556805c1e4bc35ff9e16f5747520540171bb65850da8f17ed0c864c0ff"} Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.310278 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.392279 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" podStartSLOduration=3.392253203 podStartE2EDuration="3.392253203s" podCreationTimestamp="2026-01-27 16:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:28.368604312 +0000 UTC m=+1454.671397800" watchObservedRunningTime="2026-01-27 16:06:28.392253203 +0000 UTC m=+1454.695046691" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.406025 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-84f46dccf4-wb9z5" podStartSLOduration=2.406007375 podStartE2EDuration="2.406007375s" podCreationTimestamp="2026-01-27 16:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:28.402532687 +0000 UTC m=+1454.705326175" watchObservedRunningTime="2026-01-27 16:06:28.406007375 +0000 UTC m=+1454.708800863" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.406113 4966 scope.go:117] "RemoveContainer" 
containerID="671c0014085b324b356b6f812ef1cd434ff28515f46fa482c08e3321703cbc0a" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.415122 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-config-data\") pod \"d50b72a4-4856-4248-b784-239009beb314\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.415196 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-sg-core-conf-yaml\") pod \"d50b72a4-4856-4248-b784-239009beb314\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.415236 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-run-httpd\") pod \"d50b72a4-4856-4248-b784-239009beb314\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.415333 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-log-httpd\") pod \"d50b72a4-4856-4248-b784-239009beb314\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.415429 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpvzd\" (UniqueName: \"kubernetes.io/projected/d50b72a4-4856-4248-b784-239009beb314-kube-api-access-xpvzd\") pod \"d50b72a4-4856-4248-b784-239009beb314\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.415493 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-combined-ca-bundle\") pod \"d50b72a4-4856-4248-b784-239009beb314\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.415516 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-scripts\") pod \"d50b72a4-4856-4248-b784-239009beb314\" (UID: \"d50b72a4-4856-4248-b784-239009beb314\") " Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.420188 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d50b72a4-4856-4248-b784-239009beb314" (UID: "d50b72a4-4856-4248-b784-239009beb314"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.423700 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d50b72a4-4856-4248-b784-239009beb314" (UID: "d50b72a4-4856-4248-b784-239009beb314"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.428114 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-scripts" (OuterVolumeSpecName: "scripts") pod "d50b72a4-4856-4248-b784-239009beb314" (UID: "d50b72a4-4856-4248-b784-239009beb314"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.429797 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d50b72a4-4856-4248-b784-239009beb314-kube-api-access-xpvzd" (OuterVolumeSpecName: "kube-api-access-xpvzd") pod "d50b72a4-4856-4248-b784-239009beb314" (UID: "d50b72a4-4856-4248-b784-239009beb314"). InnerVolumeSpecName "kube-api-access-xpvzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.448885 4966 scope.go:117] "RemoveContainer" containerID="5bbd3659f41fd5066a31dffdb9e308a96d92349f0b720d62c2bf6a95b77d7b0e" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.472109 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d50b72a4-4856-4248-b784-239009beb314" (UID: "d50b72a4-4856-4248-b784-239009beb314"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.534028 4966 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.534056 4966 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.534066 4966 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d50b72a4-4856-4248-b784-239009beb314-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.534075 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpvzd\" (UniqueName: \"kubernetes.io/projected/d50b72a4-4856-4248-b784-239009beb314-kube-api-access-xpvzd\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.534085 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.581609 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d50b72a4-4856-4248-b784-239009beb314" (UID: "d50b72a4-4856-4248-b784-239009beb314"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.635605 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-config-data" (OuterVolumeSpecName: "config-data") pod "d50b72a4-4856-4248-b784-239009beb314" (UID: "d50b72a4-4856-4248-b784-239009beb314"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.637018 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.637038 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50b72a4-4856-4248-b784-239009beb314-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:28 crc kubenswrapper[4966]: I0127 16:06:28.792087 4966 scope.go:117] "RemoveContainer" containerID="1dec27b91ce2f475d477ea70f8314f9a1eb7cea3697059c003c9e588fccff574" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.324636 4966 generic.go:334] "Generic (PLEG): container finished" podID="30e072ee-5227-4c65-8ccd-bc5e01bc5394" containerID="3e0b475de56f30e878607d1fbbe61a9689adf74fb2357f7ea106db898cad61da" exitCode=0 Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.324701 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" event={"ID":"30e072ee-5227-4c65-8ccd-bc5e01bc5394","Type":"ContainerDied","Data":"3e0b475de56f30e878607d1fbbe61a9689adf74fb2357f7ea106db898cad61da"} Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.329584 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d50b72a4-4856-4248-b784-239009beb314","Type":"ContainerDied","Data":"b512d82ebb7a777df9b1a92fca506801927a3cc48e324747962b3ea261934524"} Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.329739 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.423483 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.433153 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.459678 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:06:29 crc kubenswrapper[4966]: E0127 16:06:29.460232 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="sg-core" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.460253 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="sg-core" Jan 27 16:06:29 crc kubenswrapper[4966]: E0127 16:06:29.460310 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="proxy-httpd" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.460320 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="proxy-httpd" Jan 27 16:06:29 crc kubenswrapper[4966]: E0127 16:06:29.460335 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="ceilometer-central-agent" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.460347 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="ceilometer-central-agent" Jan 27 16:06:29 crc kubenswrapper[4966]: E0127 16:06:29.460368 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="ceilometer-notification-agent" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.460379 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="ceilometer-notification-agent" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.460729 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="ceilometer-notification-agent" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.460760 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="proxy-httpd" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.460784 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="ceilometer-central-agent" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.460821 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="d50b72a4-4856-4248-b784-239009beb314" containerName="sg-core" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.463824 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.484100 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.484303 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.523418 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.569118 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-run-httpd\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.569172 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.569198 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn8j5\" (UniqueName: \"kubernetes.io/projected/b939989d-e7fd-4781-b285-ae9f22fc9bf4-kube-api-access-jn8j5\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.569244 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-scripts\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.569354 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-log-httpd\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.569385 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.569423 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-config-data\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.671130 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-log-httpd\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.671192 4966 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.671237 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-config-data\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.671266 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-run-httpd\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.671288 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.671311 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn8j5\" (UniqueName: \"kubernetes.io/projected/b939989d-e7fd-4781-b285-ae9f22fc9bf4-kube-api-access-jn8j5\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.671366 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-scripts\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.671739 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-log-httpd\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.672137 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-run-httpd\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.683380 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-config-data\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.687113 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.692036 4966 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.693173 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-scripts\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.704208 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn8j5\" (UniqueName: \"kubernetes.io/projected/b939989d-e7fd-4781-b285-ae9f22fc9bf4-kube-api-access-jn8j5\") pod \"ceilometer-0\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " pod="openstack/ceilometer-0" Jan 27 16:06:29 crc kubenswrapper[4966]: I0127 16:06:29.813665 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:06:30 crc kubenswrapper[4966]: I0127 16:06:30.345402 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" event={"ID":"30e072ee-5227-4c65-8ccd-bc5e01bc5394","Type":"ContainerStarted","Data":"724a74d414cadbe46ba20c2c8bab4a6aca86a3b58fa76fd899f283e5f952e86d"} Jan 27 16:06:30 crc kubenswrapper[4966]: I0127 16:06:30.345842 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:30 crc kubenswrapper[4966]: I0127 16:06:30.356081 4966 generic.go:334] "Generic (PLEG): container finished" podID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerID="9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066" exitCode=0 Jan 27 16:06:30 crc kubenswrapper[4966]: I0127 16:06:30.356171 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxq52" event={"ID":"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb","Type":"ContainerDied","Data":"9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066"} Jan 27 16:06:30 crc kubenswrapper[4966]: I0127 16:06:30.398244 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" podStartSLOduration=4.398229072 podStartE2EDuration="4.398229072s" podCreationTimestamp="2026-01-27 16:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:30.3752356 +0000 UTC m=+1456.678029098" watchObservedRunningTime="2026-01-27 16:06:30.398229072 +0000 UTC m=+1456.701022560" Jan 27 16:06:30 crc kubenswrapper[4966]: I0127 16:06:30.557043 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d50b72a4-4856-4248-b784-239009beb314" path="/var/lib/kubelet/pods/d50b72a4-4856-4248-b784-239009beb314/volumes" Jan 27 16:06:31 crc kubenswrapper[4966]: I0127 16:06:31.994983 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.537869 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5d57b89dfc-4z8bl"] Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.539416 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b59f8d4fb-hj8g9"] Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.540361 4966 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.540832 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.564035 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5d57b89dfc-4z8bl"] Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.570570 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b59f8d4fb-hj8g9"] Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.609985 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-767f4dd79b-llll8"] Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.611844 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.626450 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-767f4dd79b-llll8"] Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.650759 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.651323 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.651453 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-combined-ca-bundle\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.651653 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data-custom\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.651714 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5zd\" (UniqueName: \"kubernetes.io/projected/8e72dc61-1e85-440d-a905-c8b1098da1d6-kube-api-access-6m5zd\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.651917 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data-custom\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" 
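The entries above trace the kubelet's volume handling for each new pod in three logged steps: the reconciler first emits "operationExecutor.VerifyControllerAttachedVolume started" (reconciler_common.go:245) when a volume enters the desired state of the world, then "operationExecutor.MountVolume started" (reconciler_common.go:218), and finally "MountVolume.SetUp succeeded" (operation_generator.go:637) once the secret/projected/empty-dir plugin has materialized the volume for the pod. Below is a minimal Go sketch, not part of kubelet, for pairing the "MountVolume started" and "MountVolume.SetUp succeeded" entries by UniqueName to estimate per-volume mount latency. It assumes the original one-entry-per-line kubelet.log (not the re-wrapped rendering in this archive), klog microsecond timestamps, the escaped-quote message format seen above, and the year 2026 taken from the journald timestamps elsewhere in this log.

// mountlat.go — offline helper for reading kubelet logs like the one above;
// illustrative only, not kubelet code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

var (
	// klog header, e.g. "I0127 16:06:32.754559" (severity, mmdd, time).
	tsRe = regexp.MustCompile(`[IEW](\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})`)
	// UniqueName as it appears inside the quoted message, e.g.
	// (UniqueName: \"kubernetes.io/secret/<uid>-config-data\")
	uniqRe = regexp.MustCompile(`UniqueName: \\"([^\\"]+)\\"`)
)

func parseTS(line string) (time.Time, bool) {
	m := tsRe.FindStringSubmatch(line)
	if m == nil {
		return time.Time{}, false
	}
	// klog omits the year; "2026" is an assumption taken from this log's
	// journald timestamps.
	t, err := time.Parse("20060102 15:04:05.000000", "2026"+m[1]+" "+m[2])
	return t, err == nil
}

func main() {
	started := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	// Some kubelet entries (e.g. the container-spec dumps on image pull
	// errors) far exceed the default 64 KiB scanner buffer.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		m := uniqRe.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		ts, ok := parseTS(line)
		if !ok {
			continue
		}
		switch {
		case strings.Contains(line, "operationExecutor.MountVolume started"):
			started[m[1]] = ts
		case strings.Contains(line, "MountVolume.SetUp succeeded"):
			if t0, ok := started[m[1]]; ok {
				fmt.Printf("%s  %v\n", m[1], ts.Sub(t0))
				delete(started, m[1])
			}
		}
	}
}

Run as: go run mountlat.go < kubelet.log. For the ceilometer-0 volumes above it would report sub-second latencies (e.g. config-data started at 16:06:29.671237 and succeeded at 16:06:29.683380, roughly 12ms).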
Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.652338 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-combined-ca-bundle\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.652464 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh4hm\" (UniqueName: \"kubernetes.io/projected/b96eb211-2a11-469e-9342-6881a3f3c799-kube-api-access-vh4hm\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754559 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-combined-ca-bundle\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754603 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh4hm\" (UniqueName: \"kubernetes.io/projected/b96eb211-2a11-469e-9342-6881a3f3c799-kube-api-access-vh4hm\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754659 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754699 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754735 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-combined-ca-bundle\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754759 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data-custom\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754785 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data-custom\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " 
pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754807 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m5zd\" (UniqueName: \"kubernetes.io/projected/8e72dc61-1e85-440d-a905-c8b1098da1d6-kube-api-access-6m5zd\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754838 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-combined-ca-bundle\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754861 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data-custom\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754930 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqz9z\" (UniqueName: \"kubernetes.io/projected/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-kube-api-access-lqz9z\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.754976 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.763868 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-combined-ca-bundle\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.764826 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-combined-ca-bundle\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.765283 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.765787 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " 
pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.773360 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data-custom\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.775380 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data-custom\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.777059 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m5zd\" (UniqueName: \"kubernetes.io/projected/8e72dc61-1e85-440d-a905-c8b1098da1d6-kube-api-access-6m5zd\") pod \"heat-cfnapi-7b59f8d4fb-hj8g9\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.777685 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh4hm\" (UniqueName: \"kubernetes.io/projected/b96eb211-2a11-469e-9342-6881a3f3c799-kube-api-access-vh4hm\") pod \"heat-engine-5d57b89dfc-4z8bl\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") " pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.856741 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data-custom\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.856813 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-combined-ca-bundle\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.856914 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqz9z\" (UniqueName: \"kubernetes.io/projected/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-kube-api-access-lqz9z\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.856960 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.861534 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc 
kubenswrapper[4966]: I0127 16:06:32.861562 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data-custom\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.863534 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-combined-ca-bundle\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.869695 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.882448 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqz9z\" (UniqueName: \"kubernetes.io/projected/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-kube-api-access-lqz9z\") pod \"heat-api-767f4dd79b-llll8\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.902123 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:32 crc kubenswrapper[4966]: I0127 16:06:32.949342 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:33 crc kubenswrapper[4966]: I0127 16:06:33.936583 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5cfcc75859-nl7s5"] Jan 27 16:06:33 crc kubenswrapper[4966]: I0127 16:06:33.965823 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7444bc9f5b-kfthl"] Jan 27 16:06:33 crc kubenswrapper[4966]: I0127 16:06:33.988637 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7c6c4669c5-dd9mb"] Jan 27 16:06:33 crc kubenswrapper[4966]: I0127 16:06:33.991057 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:33 crc kubenswrapper[4966]: I0127 16:06:33.995268 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 27 16:06:33 crc kubenswrapper[4966]: I0127 16:06:33.995284 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.023839 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d949f4598-q7z92"] Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.025862 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.037967 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c6c4669c5-dd9mb"] Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.039044 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.043240 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.062044 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d949f4598-q7z92"] Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.088200 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-combined-ca-bundle\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.088536 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-internal-tls-certs\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.089140 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.089244 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-combined-ca-bundle\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.089345 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2sw\" (UniqueName: \"kubernetes.io/projected/4ea91e8a-6a88-4f54-a60e-81f68d447beb-kube-api-access-kl2sw\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.089516 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-public-tls-certs\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.089702 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2rqc\" (UniqueName: \"kubernetes.io/projected/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-kube-api-access-s2rqc\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " 
pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.089982 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.090136 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-public-tls-certs\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.090338 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data-custom\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.090460 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data-custom\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.090618 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-internal-tls-certs\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.133288 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.134590 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerName="glance-log" containerID="cri-o://2557b7dbc0d640cce7ce19958c6f56ff01730dab2b69441f098f61907d948e84" gracePeriod=30 Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.134746 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerName="glance-httpd" containerID="cri-o://96160452ef69eb575158448d9a2c75f539b571a48dce8eeef6d2a9c545dfd924" gracePeriod=30 Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.193356 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-combined-ca-bundle\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.193431 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-internal-tls-certs\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.193469 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.193488 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-combined-ca-bundle\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.193504 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl2sw\" (UniqueName: \"kubernetes.io/projected/4ea91e8a-6a88-4f54-a60e-81f68d447beb-kube-api-access-kl2sw\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.194579 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-public-tls-certs\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.194625 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2rqc\" (UniqueName: \"kubernetes.io/projected/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-kube-api-access-s2rqc\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.194726 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.194756 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-public-tls-certs\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.194809 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data-custom\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.194832 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data-custom\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.194870 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-internal-tls-certs\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.200278 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-public-tls-certs\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.200503 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data-custom\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.201029 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-public-tls-certs\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.201490 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-combined-ca-bundle\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.202060 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-internal-tls-certs\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.202139 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.203375 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data-custom\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.204142 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: 
\"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.209374 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl2sw\" (UniqueName: \"kubernetes.io/projected/4ea91e8a-6a88-4f54-a60e-81f68d447beb-kube-api-access-kl2sw\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.213224 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-internal-tls-certs\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.215277 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2rqc\" (UniqueName: \"kubernetes.io/projected/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-kube-api-access-s2rqc\") pod \"heat-api-7c6c4669c5-dd9mb\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.224501 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-combined-ca-bundle\") pod \"heat-cfnapi-d949f4598-q7z92\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.313182 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.354838 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.411984 4966 generic.go:334] "Generic (PLEG): container finished" podID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerID="2557b7dbc0d640cce7ce19958c6f56ff01730dab2b69441f098f61907d948e84" exitCode=143 Jan 27 16:06:34 crc kubenswrapper[4966]: I0127 16:06:34.412024 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4417e7ad-d093-4fe3-bf2a-d7504ba5db81","Type":"ContainerDied","Data":"2557b7dbc0d640cce7ce19958c6f56ff01730dab2b69441f098f61907d948e84"} Jan 27 16:06:35 crc kubenswrapper[4966]: I0127 16:06:35.326061 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 16:06:35 crc kubenswrapper[4966]: I0127 16:06:35.327149 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-log" containerID="cri-o://79955df90857be0dc204773aba9d4c7b251a4c1f8e34305d7a46483bde28a748" gracePeriod=30 Jan 27 16:06:35 crc kubenswrapper[4966]: I0127 16:06:35.327221 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-httpd" containerID="cri-o://2d30a3136e0067a73d4556ad2583452bb94726a7e73445d9956209567a53e232" gracePeriod=30 Jan 27 16:06:36 crc kubenswrapper[4966]: I0127 16:06:36.018437 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:36 crc kubenswrapper[4966]: I0127 16:06:36.026555 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" Jan 27 16:06:36 crc kubenswrapper[4966]: I0127 16:06:36.435427 4966 generic.go:334] "Generic (PLEG): container finished" podID="ae145432-f4ac-4937-a71d-5c871832c20a" containerID="79955df90857be0dc204773aba9d4c7b251a4c1f8e34305d7a46483bde28a748" exitCode=143 Jan 27 16:06:36 crc kubenswrapper[4966]: I0127 16:06:36.435507 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae145432-f4ac-4937-a71d-5c871832c20a","Type":"ContainerDied","Data":"79955df90857be0dc204773aba9d4c7b251a4c1f8e34305d7a46483bde28a748"} Jan 27 16:06:36 crc kubenswrapper[4966]: I0127 16:06:36.658052 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:06:36 crc kubenswrapper[4966]: I0127 16:06:36.737016 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-22rl8"] Jan 27 16:06:36 crc kubenswrapper[4966]: I0127 16:06:36.737243 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" podUID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" containerName="dnsmasq-dns" containerID="cri-o://eeb0f1440ced4edb03a189170d2506dc0aa04d3d1985ed20a1eb40072dd82bb3" gracePeriod=10 Jan 27 16:06:37 crc kubenswrapper[4966]: I0127 16:06:37.077835 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" podUID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.204:5353: connect: connection refused" Jan 27 16:06:37 crc kubenswrapper[4966]: I0127 16:06:37.446562 
4966 generic.go:334] "Generic (PLEG): container finished" podID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" containerID="eeb0f1440ced4edb03a189170d2506dc0aa04d3d1985ed20a1eb40072dd82bb3" exitCode=0 Jan 27 16:06:37 crc kubenswrapper[4966]: I0127 16:06:37.446600 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" event={"ID":"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6","Type":"ContainerDied","Data":"eeb0f1440ced4edb03a189170d2506dc0aa04d3d1985ed20a1eb40072dd82bb3"} Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.464194 4966 generic.go:334] "Generic (PLEG): container finished" podID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerID="96160452ef69eb575158448d9a2c75f539b571a48dce8eeef6d2a9c545dfd924" exitCode=0 Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.464394 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4417e7ad-d093-4fe3-bf2a-d7504ba5db81","Type":"ContainerDied","Data":"96160452ef69eb575158448d9a2c75f539b571a48dce8eeef6d2a9c545dfd924"} Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.482780 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.190:9292/healthcheck\": read tcp 10.217.0.2:34700->10.217.0.190:9292: read: connection reset by peer" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.483126 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.190:9292/healthcheck\": read tcp 10.217.0.2:34708->10.217.0.190:9292: read: connection reset by peer" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.708698 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:38 crc kubenswrapper[4966]: E0127 16:06:38.780118 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified" Jan 27 16:06:38 crc kubenswrapper[4966]: E0127 16:06:38.780414 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-cfnapi,Image:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_httpd_setup && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n585h658hdch5fdhd9h6dh585h666h595h8fh75h59fh645h5dch547h67dhc9hbbhc6h7bh9fh7fh567h544h55dh57bh5cdh659h79hb9h5bdhdbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:heat-cfnapi-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/etc/heat/heat.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmcng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-cfnapi-7444bc9f5b-kfthl_openstack(856ac2bf-cc02-4921-84ff-947e5b947997): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 16:06:38 crc kubenswrapper[4966]: E0127 
16:06:38.781831 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" podUID="856ac2bf-cc02-4921-84ff-947e5b947997" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.821084 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-sb\") pod \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.821468 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-swift-storage-0\") pod \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.821838 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-nb\") pod \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.821999 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-svc\") pod \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.822183 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cdl4\" (UniqueName: \"kubernetes.io/projected/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-kube-api-access-4cdl4\") pod \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.822326 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-config\") pod \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\" (UID: \"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6\") " Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.831086 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-kube-api-access-4cdl4" (OuterVolumeSpecName: "kube-api-access-4cdl4") pod "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" (UID: "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6"). InnerVolumeSpecName "kube-api-access-4cdl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.896749 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" (UID: "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.899456 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" (UID: "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.905496 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-config" (OuterVolumeSpecName: "config") pod "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" (UID: "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.926306 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" (UID: "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.926439 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.926472 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cdl4\" (UniqueName: \"kubernetes.io/projected/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-kube-api-access-4cdl4\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.926483 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.926492 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:38 crc kubenswrapper[4966]: I0127 16:06:38.948968 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" (UID: "f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.029199 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.029231 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.351910 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.443181 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-config-data\") pod \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.453665 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n65zc\" (UniqueName: \"kubernetes.io/projected/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-kube-api-access-n65zc\") pod \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.453832 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-httpd-run\") pod \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.453942 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-logs\") pod \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.455005 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.455034 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-scripts\") pod \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.455060 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-public-tls-certs\") pod \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.455101 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-combined-ca-bundle\") pod \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\" (UID: \"4417e7ad-d093-4fe3-bf2a-d7504ba5db81\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.483933 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4417e7ad-d093-4fe3-bf2a-d7504ba5db81" (UID: "4417e7ad-d093-4fe3-bf2a-d7504ba5db81"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.485083 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-logs" (OuterVolumeSpecName: "logs") pod "4417e7ad-d093-4fe3-bf2a-d7504ba5db81" (UID: "4417e7ad-d093-4fe3-bf2a-d7504ba5db81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.485214 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-scripts" (OuterVolumeSpecName: "scripts") pod "4417e7ad-d093-4fe3-bf2a-d7504ba5db81" (UID: "4417e7ad-d093-4fe3-bf2a-d7504ba5db81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.486462 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-kube-api-access-n65zc" (OuterVolumeSpecName: "kube-api-access-n65zc") pod "4417e7ad-d093-4fe3-bf2a-d7504ba5db81" (UID: "4417e7ad-d093-4fe3-bf2a-d7504ba5db81"). InnerVolumeSpecName "kube-api-access-n65zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.554602 4966 generic.go:334] "Generic (PLEG): container finished" podID="ae145432-f4ac-4937-a71d-5c871832c20a" containerID="2d30a3136e0067a73d4556ad2583452bb94726a7e73445d9956209567a53e232" exitCode=0 Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.554675 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae145432-f4ac-4937-a71d-5c871832c20a","Type":"ContainerDied","Data":"2d30a3136e0067a73d4556ad2583452bb94726a7e73445d9956209567a53e232"} Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.560508 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4417e7ad-d093-4fe3-bf2a-d7504ba5db81","Type":"ContainerDied","Data":"d69d4366a002a308cc0590eda89e2ed8f76fc9f7917ae244075ab4a73a2ee2a0"} Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.560573 4966 scope.go:117] "RemoveContainer" containerID="96160452ef69eb575158448d9a2c75f539b571a48dce8eeef6d2a9c545dfd924" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.560866 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.561510 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n65zc\" (UniqueName: \"kubernetes.io/projected/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-kube-api-access-n65zc\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.562341 4966 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.562375 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.562386 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.564703 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.564780 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-22rl8" event={"ID":"f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6","Type":"ContainerDied","Data":"aa53c6dbb151e70bd802b8ee22d4fbdc2e3733c42fc42c6eb4c2dad0961af1df"} Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.597229 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.225047201 podStartE2EDuration="20.597212782s" podCreationTimestamp="2026-01-27 16:06:19 +0000 UTC" firstStartedPulling="2026-01-27 16:06:20.49482674 +0000 UTC m=+1446.797620228" lastFinishedPulling="2026-01-27 16:06:38.866992321 +0000 UTC m=+1465.169785809" observedRunningTime="2026-01-27 16:06:39.592007729 +0000 UTC m=+1465.894801217" watchObservedRunningTime="2026-01-27 16:06:39.597212782 +0000 UTC m=+1465.900006270" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.598325 4966 scope.go:117] "RemoveContainer" containerID="2557b7dbc0d640cce7ce19958c6f56ff01730dab2b69441f098f61907d948e84" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.657596 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0" (OuterVolumeSpecName: "glance") pod "4417e7ad-d093-4fe3-bf2a-d7504ba5db81" (UID: "4417e7ad-d093-4fe3-bf2a-d7504ba5db81"). InnerVolumeSpecName "pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.662840 4966 scope.go:117] "RemoveContainer" containerID="eeb0f1440ced4edb03a189170d2506dc0aa04d3d1985ed20a1eb40072dd82bb3" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.664621 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") on node \"crc\" " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.671865 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-22rl8"] Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.683518 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-22rl8"] Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.691224 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.696326 4966 scope.go:117] "RemoveContainer" containerID="c3c8c68a7e898c416b965815724fc5fd6a882d9cb078e852cca229a4fa3254b2" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.765676 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-scripts\") pod \"ae145432-f4ac-4937-a71d-5c871832c20a\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.767967 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-internal-tls-certs\") pod \"ae145432-f4ac-4937-a71d-5c871832c20a\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.768573 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"ae145432-f4ac-4937-a71d-5c871832c20a\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.768655 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-logs\") pod \"ae145432-f4ac-4937-a71d-5c871832c20a\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.768877 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldnm9\" (UniqueName: \"kubernetes.io/projected/ae145432-f4ac-4937-a71d-5c871832c20a-kube-api-access-ldnm9\") pod \"ae145432-f4ac-4937-a71d-5c871832c20a\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.769246 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-logs" (OuterVolumeSpecName: "logs") pod "ae145432-f4ac-4937-a71d-5c871832c20a" (UID: "ae145432-f4ac-4937-a71d-5c871832c20a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.769312 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-combined-ca-bundle\") pod \"ae145432-f4ac-4937-a71d-5c871832c20a\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.769419 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-config-data\") pod \"ae145432-f4ac-4937-a71d-5c871832c20a\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.769494 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-httpd-run\") pod \"ae145432-f4ac-4937-a71d-5c871832c20a\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.770717 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.770992 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ae145432-f4ac-4937-a71d-5c871832c20a" (UID: "ae145432-f4ac-4937-a71d-5c871832c20a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.776316 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.776454 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0") on node "crc" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.783681 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae145432-f4ac-4937-a71d-5c871832c20a-kube-api-access-ldnm9" (OuterVolumeSpecName: "kube-api-access-ldnm9") pod "ae145432-f4ac-4937-a71d-5c871832c20a" (UID: "ae145432-f4ac-4937-a71d-5c871832c20a"). InnerVolumeSpecName "kube-api-access-ldnm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.784190 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-scripts" (OuterVolumeSpecName: "scripts") pod "ae145432-f4ac-4937-a71d-5c871832c20a" (UID: "ae145432-f4ac-4937-a71d-5c871832c20a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.819205 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4417e7ad-d093-4fe3-bf2a-d7504ba5db81" (UID: "4417e7ad-d093-4fe3-bf2a-d7504ba5db81"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: E0127 16:06:39.835224 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51 podName:ae145432-f4ac-4937-a71d-5c871832c20a nodeName:}" failed. No retries permitted until 2026-01-27 16:06:40.335179388 +0000 UTC m=+1466.637972876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "glance" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51") pod "ae145432-f4ac-4937-a71d-5c871832c20a" (UID: "ae145432-f4ac-4937-a71d-5c871832c20a") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.861958 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae145432-f4ac-4937-a71d-5c871832c20a" (UID: "ae145432-f4ac-4937-a71d-5c871832c20a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.866117 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4417e7ad-d093-4fe3-bf2a-d7504ba5db81" (UID: "4417e7ad-d093-4fe3-bf2a-d7504ba5db81"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.866204 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-config-data" (OuterVolumeSpecName: "config-data") pod "4417e7ad-d093-4fe3-bf2a-d7504ba5db81" (UID: "4417e7ad-d093-4fe3-bf2a-d7504ba5db81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.874804 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.874839 4966 reconciler_common.go:293] "Volume detached for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.874853 4966 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.874867 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldnm9\" (UniqueName: \"kubernetes.io/projected/ae145432-f4ac-4937-a71d-5c871832c20a-kube-api-access-ldnm9\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.874880 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.874890 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.874916 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4417e7ad-d093-4fe3-bf2a-d7504ba5db81-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.874926 4966 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae145432-f4ac-4937-a71d-5c871832c20a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.897167 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-config-data" (OuterVolumeSpecName: "config-data") pod "ae145432-f4ac-4937-a71d-5c871832c20a" (UID: "ae145432-f4ac-4937-a71d-5c871832c20a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.902974 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ae145432-f4ac-4937-a71d-5c871832c20a" (UID: "ae145432-f4ac-4937-a71d-5c871832c20a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.977177 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4966]: I0127 16:06:39.977217 4966 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae145432-f4ac-4937-a71d-5c871832c20a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.124713 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.124771 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.192077 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c6c4669c5-dd9mb"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.260469 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-767f4dd79b-llll8"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.313686 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5d57b89dfc-4z8bl"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.319913 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.347667 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b59f8d4fb-hj8g9"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.393487 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"ae145432-f4ac-4937-a71d-5c871832c20a\" (UID: \"ae145432-f4ac-4937-a71d-5c871832c20a\") " Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.402053 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bh6sq"] Jan 27 16:06:40 crc kubenswrapper[4966]: E0127 16:06:40.406016 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerName="glance-log" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.406178 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerName="glance-log" Jan 27 16:06:40 crc kubenswrapper[4966]: E0127 16:06:40.406551 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" containerName="dnsmasq-dns" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.406617 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" containerName="dnsmasq-dns" Jan 27 16:06:40 crc kubenswrapper[4966]: E0127 16:06:40.406683 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-log" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.406740 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-log" Jan 27 16:06:40 crc kubenswrapper[4966]: E0127 16:06:40.406799 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-httpd" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.406849 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-httpd" Jan 27 16:06:40 crc kubenswrapper[4966]: E0127 16:06:40.406922 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" containerName="init" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.406976 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" containerName="init" Jan 27 16:06:40 crc kubenswrapper[4966]: E0127 16:06:40.407074 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerName="glance-httpd" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.407132 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerName="glance-httpd" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.407424 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" containerName="dnsmasq-dns" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.407488 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-httpd" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.407551 4966 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerName="glance-httpd" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.407966 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" containerName="glance-log" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.408096 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" containerName="glance-log" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.408971 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.440555 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.486218 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bh6sq"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.497410 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-combined-ca-bundle\") pod \"856ac2bf-cc02-4921-84ff-947e5b947997\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.497760 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data\") pod \"856ac2bf-cc02-4921-84ff-947e5b947997\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.497956 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data-custom\") pod \"856ac2bf-cc02-4921-84ff-947e5b947997\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.498069 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmcng\" (UniqueName: \"kubernetes.io/projected/856ac2bf-cc02-4921-84ff-947e5b947997-kube-api-access-vmcng\") pod \"856ac2bf-cc02-4921-84ff-947e5b947997\" (UID: \"856ac2bf-cc02-4921-84ff-947e5b947997\") " Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.506771 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "856ac2bf-cc02-4921-84ff-947e5b947997" (UID: "856ac2bf-cc02-4921-84ff-947e5b947997"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.510163 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data" (OuterVolumeSpecName: "config-data") pod "856ac2bf-cc02-4921-84ff-947e5b947997" (UID: "856ac2bf-cc02-4921-84ff-947e5b947997"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.510968 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "856ac2bf-cc02-4921-84ff-947e5b947997" (UID: "856ac2bf-cc02-4921-84ff-947e5b947997"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.516973 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856ac2bf-cc02-4921-84ff-947e5b947997-kube-api-access-vmcng" (OuterVolumeSpecName: "kube-api-access-vmcng") pod "856ac2bf-cc02-4921-84ff-947e5b947997" (UID: "856ac2bf-cc02-4921-84ff-947e5b947997"). InnerVolumeSpecName "kube-api-access-vmcng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.518207 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51" (OuterVolumeSpecName: "glance") pod "ae145432-f4ac-4937-a71d-5c871832c20a" (UID: "ae145432-f4ac-4937-a71d-5c871832c20a"). InnerVolumeSpecName "pvc-eac12953-51d3-4d82-a334-6290e6023c51". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.570308 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6" path="/var/lib/kubelet/pods/f6a2afb9-0e3d-4e5c-8da7-9141bf67ccd6/volumes" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.571916 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.571948 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d949f4598-q7z92"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.593124 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.607683 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6v89\" (UniqueName: \"kubernetes.io/projected/e4fc12a0-b687-4984-a892-6ce0cdf2c920-kube-api-access-l6v89\") pod \"nova-api-db-create-bh6sq\" (UID: \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\") " pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.607863 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4fc12a0-b687-4984-a892-6ce0cdf2c920-operator-scripts\") pod \"nova-api-db-create-bh6sq\" (UID: \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\") " pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.607989 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") on node \"crc\" " Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.608010 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data-custom\") 
on node \"crc\" DevicePath \"\"" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.608021 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmcng\" (UniqueName: \"kubernetes.io/projected/856ac2bf-cc02-4921-84ff-947e5b947997-kube-api-access-vmcng\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.608035 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.608045 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856ac2bf-cc02-4921-84ff-947e5b947997-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.622040 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" event={"ID":"8e72dc61-1e85-440d-a905-c8b1098da1d6","Type":"ContainerStarted","Data":"0c68908095acab19e29db447e05b13e4b4bd1f2b72e0346edeadf1d462fb42f4"} Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.693555 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxq52" event={"ID":"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb","Type":"ContainerStarted","Data":"7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459"} Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.700977 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.702943 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.703740 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5461689b-4309-4ffe-9b5a-fef2eba77915","Type":"ContainerStarted","Data":"9bb72d3edd8037eb9884c4e32a60c246278365adcec0b11768578d25d564c950"} Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.713646 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.714134 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.715649 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerStarted","Data":"5d3e2d20036aad7be4c7d10f6d20d4265f41e574225c6070de914becfa4f8625"} Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.716020 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.717112 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4fc12a0-b687-4984-a892-6ce0cdf2c920-operator-scripts\") pod \"nova-api-db-create-bh6sq\" (UID: \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\") " pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.717393 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6v89\" (UniqueName: 
\"kubernetes.io/projected/e4fc12a0-b687-4984-a892-6ce0cdf2c920-kube-api-access-l6v89\") pod \"nova-api-db-create-bh6sq\" (UID: \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\") " pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.717992 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c6c4669c5-dd9mb" event={"ID":"8ca408b2-f3aa-4504-9f89-8028f0cdc94a","Type":"ContainerStarted","Data":"c7a82188155f3aaa5440fd9d351780fb77e7f69950a47edc88c1737eef393156"} Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.718299 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4fc12a0-b687-4984-a892-6ce0cdf2c920-operator-scripts\") pod \"nova-api-db-create-bh6sq\" (UID: \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\") " pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.720355 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-767f4dd79b-llll8" event={"ID":"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421","Type":"ContainerStarted","Data":"91b82123ab8ea193d0c2379ba59fbc5080b0c2faba22f1f8f2a02bee23733920"} Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.721680 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d949f4598-q7z92" event={"ID":"4ea91e8a-6a88-4f54-a60e-81f68d447beb","Type":"ContainerStarted","Data":"f0df67c86d0aa7dbef7664b195448ff033981d162d9a4a9ef959aebfba051e79"} Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.729190 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.735752 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae145432-f4ac-4937-a71d-5c871832c20a","Type":"ContainerDied","Data":"a703be70ecb949d53b21c1086e596499ec6a76f1b363ffeff1180859322b065f"} Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.735820 4966 scope.go:117] "RemoveContainer" containerID="2d30a3136e0067a73d4556ad2583452bb94726a7e73445d9956209567a53e232" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.759113 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6v89\" (UniqueName: \"kubernetes.io/projected/e4fc12a0-b687-4984-a892-6ce0cdf2c920-kube-api-access-l6v89\") pod \"nova-api-db-create-bh6sq\" (UID: \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\") " pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.760107 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7djpj"] Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.762219 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" Jan 27 16:06:40 crc kubenswrapper[4966]: I0127 16:06:40.773353 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5cfcc75859-nl7s5" podUID="3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" containerName="heat-api" containerID="cri-o://c23be1534151f89adfcbb184557ba6adb34249bc1224f6c32ef2b738cab80472" gracePeriod=60 Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.800535 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.800574 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7444bc9f5b-kfthl" event={"ID":"856ac2bf-cc02-4921-84ff-947e5b947997","Type":"ContainerDied","Data":"463c0332fa7bb31263dc22103261ca0f2325bd13444d7344bc3a9b048a35ed3e"} Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.800604 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cfcc75859-nl7s5" event={"ID":"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2","Type":"ContainerStarted","Data":"c23be1534151f89adfcbb184557ba6adb34249bc1224f6c32ef2b738cab80472"} Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.800648 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7djpj"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.800758 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.800822 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d57b89dfc-4z8bl" event={"ID":"b96eb211-2a11-469e-9342-6881a3f3c799","Type":"ContainerStarted","Data":"cdd2b07a4ff3851754e8f705edfcc846b84ecc656f3848d792fd555b344c2af5"} Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.813200 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.821625 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.821773 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3089e5a-e4ea-4397-a676-fd4230311639-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.821916 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.821990 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.822079 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.822116 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9brk\" (UniqueName: \"kubernetes.io/projected/c3089e5a-e4ea-4397-a676-fd4230311639-kube-api-access-g9brk\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.822150 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3089e5a-e4ea-4397-a676-fd4230311639-logs\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.822180 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.837166 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-039f-account-create-update-5k2qz"] Jan 27 16:06:41 crc 
kubenswrapper[4966]: I0127 16:06:40.842166 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.843963 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.844192 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-eac12953-51d3-4d82-a334-6290e6023c51" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51") on node "crc" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.846247 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.866273 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-039f-account-create-update-5k2qz"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.912980 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-57bt9"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.921061 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.923354 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-57bt9"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924182 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924248 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcl5k\" (UniqueName: \"kubernetes.io/projected/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-kube-api-access-wcl5k\") pod \"nova-cell0-db-create-7djpj\" (UID: \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\") " pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924307 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3089e5a-e4ea-4397-a676-fd4230311639-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924420 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924467 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-operator-scripts\") pod \"nova-cell0-db-create-7djpj\" (UID: \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\") " pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:41 crc 
kubenswrapper[4966]: I0127 16:06:40.924487 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924536 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924560 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9brk\" (UniqueName: \"kubernetes.io/projected/c3089e5a-e4ea-4397-a676-fd4230311639-kube-api-access-g9brk\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924576 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3089e5a-e4ea-4397-a676-fd4230311639-logs\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924595 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.924649 4966 reconciler_common.go:293] "Volume detached for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.928495 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.928751 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.929274 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3089e5a-e4ea-4397-a676-fd4230311639-logs\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.929883 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c3089e5a-e4ea-4397-a676-fd4230311639-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.932788 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.935952 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a74e-account-create-update-vg7tr"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.936093 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3089e5a-e4ea-4397-a676-fd4230311639-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.937666 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.939357 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.939397 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/976b8fe222d013f8479d40e9925328ad0f72c98e7e279071e3e1ab72dc6f328d/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.941245 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.946487 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9brk\" (UniqueName: \"kubernetes.io/projected/c3089e5a-e4ea-4397-a676-fd4230311639-kube-api-access-g9brk\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.949414 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a74e-account-create-update-vg7tr"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:40.961873 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kxq52" podStartSLOduration=3.919557097 podStartE2EDuration="21.961852018s" podCreationTimestamp="2026-01-27 16:06:19 +0000 UTC" firstStartedPulling="2026-01-27 16:06:21.092461051 +0000 UTC m=+1447.395254539" lastFinishedPulling="2026-01-27 16:06:39.134755972 +0000 UTC m=+1465.437549460" observedRunningTime="2026-01-27 16:06:40.714358943 +0000 UTC m=+1467.017152431" watchObservedRunningTime="2026-01-27 16:06:40.961852018 +0000 UTC m=+1467.264645506" Jan 27 16:06:41 crc 
kubenswrapper[4966]: I0127 16:06:40.981001 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5cfcc75859-nl7s5" podStartSLOduration=3.793778478 podStartE2EDuration="14.980983279s" podCreationTimestamp="2026-01-27 16:06:26 +0000 UTC" firstStartedPulling="2026-01-27 16:06:27.680065889 +0000 UTC m=+1453.982859377" lastFinishedPulling="2026-01-27 16:06:38.86727069 +0000 UTC m=+1465.170064178" observedRunningTime="2026-01-27 16:06:40.791806222 +0000 UTC m=+1467.094599710" watchObservedRunningTime="2026-01-27 16:06:40.980983279 +0000 UTC m=+1467.283776767" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.026075 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66aafada-a280-4e39-96bd-0171e2f190f7-operator-scripts\") pod \"nova-cell1-db-create-57bt9\" (UID: \"66aafada-a280-4e39-96bd-0171e2f190f7\") " pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.026338 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbpg\" (UniqueName: \"kubernetes.io/projected/d060b664-4536-45b0-921f-4bc5a759fb1a-kube-api-access-7zbpg\") pod \"nova-api-039f-account-create-update-5k2qz\" (UID: \"d060b664-4536-45b0-921f-4bc5a759fb1a\") " pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.026397 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-operator-scripts\") pod \"nova-cell0-db-create-7djpj\" (UID: \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\") " pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.026421 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rplpb\" (UniqueName: \"kubernetes.io/projected/20b288ea-02c0-423e-b6d7-621f137afa58-kube-api-access-rplpb\") pod \"nova-cell0-a74e-account-create-update-vg7tr\" (UID: \"20b288ea-02c0-423e-b6d7-621f137afa58\") " pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.026453 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d060b664-4536-45b0-921f-4bc5a759fb1a-operator-scripts\") pod \"nova-api-039f-account-create-update-5k2qz\" (UID: \"d060b664-4536-45b0-921f-4bc5a759fb1a\") " pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.026516 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdvp\" (UniqueName: \"kubernetes.io/projected/66aafada-a280-4e39-96bd-0171e2f190f7-kube-api-access-wzdvp\") pod \"nova-cell1-db-create-57bt9\" (UID: \"66aafada-a280-4e39-96bd-0171e2f190f7\") " pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.026516 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b27751e2-ad89-42a1-8e5e-5cea6d2c0bf0\") pod \"glance-default-external-api-0\" (UID: \"c3089e5a-e4ea-4397-a676-fd4230311639\") " 
pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.026557 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20b288ea-02c0-423e-b6d7-621f137afa58-operator-scripts\") pod \"nova-cell0-a74e-account-create-update-vg7tr\" (UID: \"20b288ea-02c0-423e-b6d7-621f137afa58\") " pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.026579 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcl5k\" (UniqueName: \"kubernetes.io/projected/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-kube-api-access-wcl5k\") pod \"nova-cell0-db-create-7djpj\" (UID: \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\") " pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.027455 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-operator-scripts\") pod \"nova-cell0-db-create-7djpj\" (UID: \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\") " pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.041340 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9a9a-account-create-update-zf625"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.051237 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.052978 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcl5k\" (UniqueName: \"kubernetes.io/projected/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-kube-api-access-wcl5k\") pod \"nova-cell0-db-create-7djpj\" (UID: \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\") " pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.055358 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9a9a-account-create-update-zf625"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.060150 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.128613 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66aafada-a280-4e39-96bd-0171e2f190f7-operator-scripts\") pod \"nova-cell1-db-create-57bt9\" (UID: \"66aafada-a280-4e39-96bd-0171e2f190f7\") " pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.128675 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbpg\" (UniqueName: \"kubernetes.io/projected/d060b664-4536-45b0-921f-4bc5a759fb1a-kube-api-access-7zbpg\") pod \"nova-api-039f-account-create-update-5k2qz\" (UID: \"d060b664-4536-45b0-921f-4bc5a759fb1a\") " pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.128734 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rplpb\" (UniqueName: \"kubernetes.io/projected/20b288ea-02c0-423e-b6d7-621f137afa58-kube-api-access-rplpb\") pod \"nova-cell0-a74e-account-create-update-vg7tr\" (UID: 
\"20b288ea-02c0-423e-b6d7-621f137afa58\") " pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.128769 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d060b664-4536-45b0-921f-4bc5a759fb1a-operator-scripts\") pod \"nova-api-039f-account-create-update-5k2qz\" (UID: \"d060b664-4536-45b0-921f-4bc5a759fb1a\") " pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.128821 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdvp\" (UniqueName: \"kubernetes.io/projected/66aafada-a280-4e39-96bd-0171e2f190f7-kube-api-access-wzdvp\") pod \"nova-cell1-db-create-57bt9\" (UID: \"66aafada-a280-4e39-96bd-0171e2f190f7\") " pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.128857 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20b288ea-02c0-423e-b6d7-621f137afa58-operator-scripts\") pod \"nova-cell0-a74e-account-create-update-vg7tr\" (UID: \"20b288ea-02c0-423e-b6d7-621f137afa58\") " pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.128887 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3edb84-792d-4066-bfae-a43c8fefe6da-operator-scripts\") pod \"nova-cell1-9a9a-account-create-update-zf625\" (UID: \"1f3edb84-792d-4066-bfae-a43c8fefe6da\") " pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.128941 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jxp\" (UniqueName: \"kubernetes.io/projected/1f3edb84-792d-4066-bfae-a43c8fefe6da-kube-api-access-p2jxp\") pod \"nova-cell1-9a9a-account-create-update-zf625\" (UID: \"1f3edb84-792d-4066-bfae-a43c8fefe6da\") " pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.129758 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66aafada-a280-4e39-96bd-0171e2f190f7-operator-scripts\") pod \"nova-cell1-db-create-57bt9\" (UID: \"66aafada-a280-4e39-96bd-0171e2f190f7\") " pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.130701 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d060b664-4536-45b0-921f-4bc5a759fb1a-operator-scripts\") pod \"nova-api-039f-account-create-update-5k2qz\" (UID: \"d060b664-4536-45b0-921f-4bc5a759fb1a\") " pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.131437 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20b288ea-02c0-423e-b6d7-621f137afa58-operator-scripts\") pod \"nova-cell0-a74e-account-create-update-vg7tr\" (UID: \"20b288ea-02c0-423e-b6d7-621f137afa58\") " pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.175176 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7zbpg\" (UniqueName: \"kubernetes.io/projected/d060b664-4536-45b0-921f-4bc5a759fb1a-kube-api-access-7zbpg\") pod \"nova-api-039f-account-create-update-5k2qz\" (UID: \"d060b664-4536-45b0-921f-4bc5a759fb1a\") " pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.179368 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rplpb\" (UniqueName: \"kubernetes.io/projected/20b288ea-02c0-423e-b6d7-621f137afa58-kube-api-access-rplpb\") pod \"nova-cell0-a74e-account-create-update-vg7tr\" (UID: \"20b288ea-02c0-423e-b6d7-621f137afa58\") " pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.182288 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdvp\" (UniqueName: \"kubernetes.io/projected/66aafada-a280-4e39-96bd-0171e2f190f7-kube-api-access-wzdvp\") pod \"nova-cell1-db-create-57bt9\" (UID: \"66aafada-a280-4e39-96bd-0171e2f190f7\") " pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.231451 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3edb84-792d-4066-bfae-a43c8fefe6da-operator-scripts\") pod \"nova-cell1-9a9a-account-create-update-zf625\" (UID: \"1f3edb84-792d-4066-bfae-a43c8fefe6da\") " pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.231537 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2jxp\" (UniqueName: \"kubernetes.io/projected/1f3edb84-792d-4066-bfae-a43c8fefe6da-kube-api-access-p2jxp\") pod \"nova-cell1-9a9a-account-create-update-zf625\" (UID: \"1f3edb84-792d-4066-bfae-a43c8fefe6da\") " pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.232418 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3edb84-792d-4066-bfae-a43c8fefe6da-operator-scripts\") pod \"nova-cell1-9a9a-account-create-update-zf625\" (UID: \"1f3edb84-792d-4066-bfae-a43c8fefe6da\") " pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.247680 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2jxp\" (UniqueName: \"kubernetes.io/projected/1f3edb84-792d-4066-bfae-a43c8fefe6da-kube-api-access-p2jxp\") pod \"nova-cell1-9a9a-account-create-update-zf625\" (UID: \"1f3edb84-792d-4066-bfae-a43c8fefe6da\") " pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.571383 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bh6sq"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.619173 4966 scope.go:117] "RemoveContainer" containerID="79955df90857be0dc204773aba9d4c7b251a4c1f8e34305d7a46483bde28a748" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.825711 4966 generic.go:334] "Generic (PLEG): container finished" podID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" containerID="836c64ff28f1772e5ab1868e347c4a874b7d1f44f0d7aed5ea62289fc5dff7ff" exitCode=1 Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.826054 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-767f4dd79b-llll8" event={"ID":"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421","Type":"ContainerDied","Data":"836c64ff28f1772e5ab1868e347c4a874b7d1f44f0d7aed5ea62289fc5dff7ff"} Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.826579 4966 scope.go:117] "RemoveContainer" containerID="836c64ff28f1772e5ab1868e347c4a874b7d1f44f0d7aed5ea62289fc5dff7ff" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.831913 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.833134 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bh6sq" event={"ID":"e4fc12a0-b687-4984-a892-6ce0cdf2c920","Type":"ContainerStarted","Data":"3f899a31243032761f5d83a7ef68c6d39dd6794293514a68ce929515dcb4ea2f"} Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.851507 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.852419 4966 generic.go:334] "Generic (PLEG): container finished" podID="3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" containerID="c23be1534151f89adfcbb184557ba6adb34249bc1224f6c32ef2b738cab80472" exitCode=0 Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.852447 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cfcc75859-nl7s5" event={"ID":"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2","Type":"ContainerDied","Data":"c23be1534151f89adfcbb184557ba6adb34249bc1224f6c32ef2b738cab80472"} Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.871817 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.921095 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.924579 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7444bc9f5b-kfthl"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.939145 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.941914 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.942675 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.948679 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7444bc9f5b-kfthl"] Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.968961 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 16:06:41 crc kubenswrapper[4966]: I0127 16:06:41.971360 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.004103 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.200220 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.200470 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.200512 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.200548 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c4e4ebf-c37e-45fb-9248-b60defccda7f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.200592 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.200676 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8xrz\" (UniqueName: \"kubernetes.io/projected/6c4e4ebf-c37e-45fb-9248-b60defccda7f-kube-api-access-t8xrz\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.200744 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c4e4ebf-c37e-45fb-9248-b60defccda7f-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.200769 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.302534 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.302842 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.302881 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.302936 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.302959 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c4e4ebf-c37e-45fb-9248-b60defccda7f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.303005 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.303075 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8xrz\" (UniqueName: \"kubernetes.io/projected/6c4e4ebf-c37e-45fb-9248-b60defccda7f-kube-api-access-t8xrz\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.303141 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c4e4ebf-c37e-45fb-9248-b60defccda7f-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.303564 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c4e4ebf-c37e-45fb-9248-b60defccda7f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.303793 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c4e4ebf-c37e-45fb-9248-b60defccda7f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.311333 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.311368 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/228a00ce595fa766fa34f65a641b8516d33aa40c9543b442d44ee069fda194ff/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.316247 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.323627 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.327077 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.350812 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4e4ebf-c37e-45fb-9248-b60defccda7f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.353077 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8xrz\" (UniqueName: \"kubernetes.io/projected/6c4e4ebf-c37e-45fb-9248-b60defccda7f-kube-api-access-t8xrz\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.426439 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eac12953-51d3-4d82-a334-6290e6023c51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac12953-51d3-4d82-a334-6290e6023c51\") pod \"glance-default-internal-api-0\" (UID: \"6c4e4ebf-c37e-45fb-9248-b60defccda7f\") " pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.453947 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.468877 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.477925 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.489027 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.512121 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.580789 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4417e7ad-d093-4fe3-bf2a-d7504ba5db81" path="/var/lib/kubelet/pods/4417e7ad-d093-4fe3-bf2a-d7504ba5db81/volumes" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.582401 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856ac2bf-cc02-4921-84ff-947e5b947997" path="/var/lib/kubelet/pods/856ac2bf-cc02-4921-84ff-947e5b947997/volumes" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.582967 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae145432-f4ac-4937-a71d-5c871832c20a" path="/var/lib/kubelet/pods/ae145432-f4ac-4937-a71d-5c871832c20a/volumes" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.685641 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-combined-ca-bundle\") pod \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.685826 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgqqb\" (UniqueName: \"kubernetes.io/projected/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-kube-api-access-mgqqb\") pod \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.685924 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data\") pod \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\" (UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.685976 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data-custom\") pod \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\" 
(UID: \"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2\") " Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.693697 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-kube-api-access-mgqqb" (OuterVolumeSpecName: "kube-api-access-mgqqb") pod "3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" (UID: "3a1374f0-f77f-4d71-9e9d-e0276e4beaa2"). InnerVolumeSpecName "kube-api-access-mgqqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.701925 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" (UID: "3a1374f0-f77f-4d71-9e9d-e0276e4beaa2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.787719 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" (UID: "3a1374f0-f77f-4d71-9e9d-e0276e4beaa2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.788560 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.788585 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.788594 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgqqb\" (UniqueName: \"kubernetes.io/projected/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-kube-api-access-mgqqb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.847426 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data" (OuterVolumeSpecName: "config-data") pod "3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" (UID: "3a1374f0-f77f-4d71-9e9d-e0276e4beaa2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.900829 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7djpj"] Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.902297 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.921494 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.948822 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-767f4dd79b-llll8" event={"ID":"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421","Type":"ContainerStarted","Data":"2e613c647969b1b4bccccf25db07b9fa88166bf69dceb4db7a0a1f400cee364a"} Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.951129 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.976682 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d57b89dfc-4z8bl" event={"ID":"b96eb211-2a11-469e-9342-6881a3f3c799","Type":"ContainerStarted","Data":"c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb"} Jan 27 16:06:42 crc kubenswrapper[4966]: I0127 16:06:42.976751 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.014807 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-039f-account-create-update-5k2qz"] Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.015704 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-767f4dd79b-llll8" podStartSLOduration=11.015685858 podStartE2EDuration="11.015685858s" podCreationTimestamp="2026-01-27 16:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:42.974589959 +0000 UTC m=+1469.277383447" watchObservedRunningTime="2026-01-27 16:06:43.015685858 +0000 UTC m=+1469.318479346" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.019360 4966 generic.go:334] "Generic (PLEG): container finished" podID="8e72dc61-1e85-440d-a905-c8b1098da1d6" containerID="55694eee7a9b387b6bbacae9113f690753922e5bb716189e1e40528bc0d04eaf" exitCode=1 Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.019436 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" event={"ID":"8e72dc61-1e85-440d-a905-c8b1098da1d6","Type":"ContainerDied","Data":"55694eee7a9b387b6bbacae9113f690753922e5bb716189e1e40528bc0d04eaf"} Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.019970 4966 scope.go:117] "RemoveContainer" containerID="55694eee7a9b387b6bbacae9113f690753922e5bb716189e1e40528bc0d04eaf" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.023543 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d949f4598-q7z92" event={"ID":"4ea91e8a-6a88-4f54-a60e-81f68d447beb","Type":"ContainerStarted","Data":"5b3e5fa076c447451790096f85e86ded1780fa91802a91bf88a63b744db33d26"} Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.024502 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.036064 4966 generic.go:334] "Generic (PLEG): container finished" podID="e4fc12a0-b687-4984-a892-6ce0cdf2c920" containerID="1e845d1895b3637fc8815ffe8589396ce7e555f7811afda15e4f49275df47c5b" exitCode=0 Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.036185 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bh6sq" event={"ID":"e4fc12a0-b687-4984-a892-6ce0cdf2c920","Type":"ContainerDied","Data":"1e845d1895b3637fc8815ffe8589396ce7e555f7811afda15e4f49275df47c5b"} Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.053087 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cfcc75859-nl7s5" event={"ID":"3a1374f0-f77f-4d71-9e9d-e0276e4beaa2","Type":"ContainerDied","Data":"e5667ba4016d27adfc9af86eaeeba676726156e47986f8b7b569e90ddcf52fb9"} Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.053387 4966 scope.go:117] "RemoveContainer" containerID="c23be1534151f89adfcbb184557ba6adb34249bc1224f6c32ef2b738cab80472" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.053500 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cfcc75859-nl7s5" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.082029 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5d57b89dfc-4z8bl" podStartSLOduration=11.082015499 podStartE2EDuration="11.082015499s" podCreationTimestamp="2026-01-27 16:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:43.00649875 +0000 UTC m=+1469.309292258" watchObservedRunningTime="2026-01-27 16:06:43.082015499 +0000 UTC m=+1469.384808987" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.105501 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerStarted","Data":"9bff6d199561c3bef1275af1a978185d82a846130c9c5740cd21cee41bff4abe"} Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.122087 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c6c4669c5-dd9mb" event={"ID":"8ca408b2-f3aa-4504-9f89-8028f0cdc94a","Type":"ContainerStarted","Data":"1cd41d813f83df6bb7678233b9e011a99b55f2e3c3305c812477d3cbfd8f2143"} Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.122428 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.123560 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d949f4598-q7z92" podStartSLOduration=9.308463308 podStartE2EDuration="10.123540532s" podCreationTimestamp="2026-01-27 16:06:33 +0000 UTC" firstStartedPulling="2026-01-27 16:06:40.384302167 +0000 UTC m=+1466.687095655" lastFinishedPulling="2026-01-27 16:06:41.199379391 +0000 UTC m=+1467.502172879" observedRunningTime="2026-01-27 16:06:43.06196106 +0000 UTC m=+1469.364754548" watchObservedRunningTime="2026-01-27 16:06:43.123540532 +0000 UTC m=+1469.426334020" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.162919 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7c6c4669c5-dd9mb" podStartSLOduration=10.162885876 podStartE2EDuration="10.162885876s" podCreationTimestamp="2026-01-27 
16:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:43.146611876 +0000 UTC m=+1469.449405364" watchObservedRunningTime="2026-01-27 16:06:43.162885876 +0000 UTC m=+1469.465679364" Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.318141 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9a9a-account-create-update-zf625"] Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.364983 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5cfcc75859-nl7s5"] Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.405146 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5cfcc75859-nl7s5"] Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.418620 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.606454 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-57bt9"] Jan 27 16:06:43 crc kubenswrapper[4966]: I0127 16:06:43.625013 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a74e-account-create-update-vg7tr"] Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.151001 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerStarted","Data":"476030d34fce52a7e4ae563042570f430938d7bac77457f13a7c1c0e7387785d"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.153414 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7djpj" event={"ID":"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe","Type":"ContainerStarted","Data":"3fdc5eb1f5d7d3d4f3e78e778416f6e48d035bdc8d2d5777718cc3cdba142e3e"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.153468 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7djpj" event={"ID":"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe","Type":"ContainerStarted","Data":"ebb1cc0f6f70c07e1bb1a5bb16b9ef1617f83524ee8781d249bbb7b5e3f78d52"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.159961 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-039f-account-create-update-5k2qz" event={"ID":"d060b664-4536-45b0-921f-4bc5a759fb1a","Type":"ContainerStarted","Data":"eb0b24bc5997f3e6578816ab3d39d3ba4b520eaf856e176519b90ac75fd0f588"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.160015 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-039f-account-create-update-5k2qz" event={"ID":"d060b664-4536-45b0-921f-4bc5a759fb1a","Type":"ContainerStarted","Data":"8e1084f9007c9eb8fc2027ac61c0767f41ff9e31ad01876de38e7027276cc7d8"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.162148 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9a9a-account-create-update-zf625" event={"ID":"1f3edb84-792d-4066-bfae-a43c8fefe6da","Type":"ContainerStarted","Data":"81d0c9cc6a9cd65a06ad5c8aea269792723874b5c60dfee8b219db19e7626468"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.162228 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9a9a-account-create-update-zf625" event={"ID":"1f3edb84-792d-4066-bfae-a43c8fefe6da","Type":"ContainerStarted","Data":"80daac897d49cfca9297cf303fe0d54982c4c723ab2d1f8e6cd023a892a26768"} Jan 27 16:06:44 
crc kubenswrapper[4966]: I0127 16:06:44.174607 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-7djpj" podStartSLOduration=4.17458721 podStartE2EDuration="4.17458721s" podCreationTimestamp="2026-01-27 16:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:44.171829203 +0000 UTC m=+1470.474622711" watchObservedRunningTime="2026-01-27 16:06:44.17458721 +0000 UTC m=+1470.477380708" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.176580 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" event={"ID":"8e72dc61-1e85-440d-a905-c8b1098da1d6","Type":"ContainerStarted","Data":"eba0113c798868784fc96c476133feb75a1eed03041e9ed7602fce0ad01c0b03"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.176970 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.200057 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-57bt9" event={"ID":"66aafada-a280-4e39-96bd-0171e2f190f7","Type":"ContainerStarted","Data":"3c4814d17bcf4fd234b51906d9e86e86da5dace55c6a60709d19c6f5a2604bd8"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.210853 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3089e5a-e4ea-4397-a676-fd4230311639","Type":"ContainerStarted","Data":"71a6551677a561f8ff7c1b812ecca1f5106ccb7f474bda45e02908af80165e02"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.227503 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-039f-account-create-update-5k2qz" podStartSLOduration=4.227480269 podStartE2EDuration="4.227480269s" podCreationTimestamp="2026-01-27 16:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:44.18638658 +0000 UTC m=+1470.489180078" watchObservedRunningTime="2026-01-27 16:06:44.227480269 +0000 UTC m=+1470.530273757" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.239279 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-9a9a-account-create-update-zf625" podStartSLOduration=4.239262859 podStartE2EDuration="4.239262859s" podCreationTimestamp="2026-01-27 16:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:44.203203318 +0000 UTC m=+1470.505996846" watchObservedRunningTime="2026-01-27 16:06:44.239262859 +0000 UTC m=+1470.542056347" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.241179 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" event={"ID":"20b288ea-02c0-423e-b6d7-621f137afa58","Type":"ContainerStarted","Data":"a35e9a867f1fb2bad4739cb208b9cd65f2425f751cf20993492e8696481ef8a9"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.245536 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c4e4ebf-c37e-45fb-9248-b60defccda7f","Type":"ContainerStarted","Data":"96eeb0a12846da86e9889bdeca84dd13b8dbbc69253320aefc79a5afb28d77a6"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.250197 4966 
generic.go:334] "Generic (PLEG): container finished" podID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" containerID="2e613c647969b1b4bccccf25db07b9fa88166bf69dceb4db7a0a1f400cee364a" exitCode=1 Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.251549 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-767f4dd79b-llll8" event={"ID":"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421","Type":"ContainerDied","Data":"2e613c647969b1b4bccccf25db07b9fa88166bf69dceb4db7a0a1f400cee364a"} Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.251625 4966 scope.go:117] "RemoveContainer" containerID="836c64ff28f1772e5ab1868e347c4a874b7d1f44f0d7aed5ea62289fc5dff7ff" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.252768 4966 scope.go:117] "RemoveContainer" containerID="2e613c647969b1b4bccccf25db07b9fa88166bf69dceb4db7a0a1f400cee364a" Jan 27 16:06:44 crc kubenswrapper[4966]: E0127 16:06:44.253039 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-767f4dd79b-llll8_openstack(e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421)\"" pod="openstack/heat-api-767f4dd79b-llll8" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.269836 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" podStartSLOduration=11.741313876 podStartE2EDuration="12.269820688s" podCreationTimestamp="2026-01-27 16:06:32 +0000 UTC" firstStartedPulling="2026-01-27 16:06:40.383026687 +0000 UTC m=+1466.685820175" lastFinishedPulling="2026-01-27 16:06:40.911533499 +0000 UTC m=+1467.214326987" observedRunningTime="2026-01-27 16:06:44.2281532 +0000 UTC m=+1470.530946708" watchObservedRunningTime="2026-01-27 16:06:44.269820688 +0000 UTC m=+1470.572614176" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.578259 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" path="/var/lib/kubelet/pods/3a1374f0-f77f-4d71-9e9d-e0276e4beaa2/volumes" Jan 27 16:06:44 crc kubenswrapper[4966]: E0127 16:06:44.596114 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e72dc61_1e85_440d_a905_c8b1098da1d6.slice/crio-conmon-eba0113c798868784fc96c476133feb75a1eed03041e9ed7602fce0ad01c0b03.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3cf7364_2a74_42fe_bdb7_c00b4f21b5fe.slice/crio-conmon-3fdc5eb1f5d7d3d4f3e78e778416f6e48d035bdc8d2d5777718cc3cdba142e3e.scope\": RecentStats: unable to find data in memory cache]" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.645808 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.781806 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6v89\" (UniqueName: \"kubernetes.io/projected/e4fc12a0-b687-4984-a892-6ce0cdf2c920-kube-api-access-l6v89\") pod \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\" (UID: \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\") " Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.782081 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4fc12a0-b687-4984-a892-6ce0cdf2c920-operator-scripts\") pod \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\" (UID: \"e4fc12a0-b687-4984-a892-6ce0cdf2c920\") " Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.840043 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4fc12a0-b687-4984-a892-6ce0cdf2c920-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4fc12a0-b687-4984-a892-6ce0cdf2c920" (UID: "e4fc12a0-b687-4984-a892-6ce0cdf2c920"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.847200 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4fc12a0-b687-4984-a892-6ce0cdf2c920-kube-api-access-l6v89" (OuterVolumeSpecName: "kube-api-access-l6v89") pod "e4fc12a0-b687-4984-a892-6ce0cdf2c920" (UID: "e4fc12a0-b687-4984-a892-6ce0cdf2c920"). InnerVolumeSpecName "kube-api-access-l6v89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.885994 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4fc12a0-b687-4984-a892-6ce0cdf2c920-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:44 crc kubenswrapper[4966]: I0127 16:06:44.886040 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6v89\" (UniqueName: \"kubernetes.io/projected/e4fc12a0-b687-4984-a892-6ce0cdf2c920-kube-api-access-l6v89\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.267472 4966 generic.go:334] "Generic (PLEG): container finished" podID="c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe" containerID="3fdc5eb1f5d7d3d4f3e78e778416f6e48d035bdc8d2d5777718cc3cdba142e3e" exitCode=0 Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.267882 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7djpj" event={"ID":"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe","Type":"ContainerDied","Data":"3fdc5eb1f5d7d3d4f3e78e778416f6e48d035bdc8d2d5777718cc3cdba142e3e"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.271475 4966 scope.go:117] "RemoveContainer" containerID="2e613c647969b1b4bccccf25db07b9fa88166bf69dceb4db7a0a1f400cee364a" Jan 27 16:06:45 crc kubenswrapper[4966]: E0127 16:06:45.271791 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-767f4dd79b-llll8_openstack(e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421)\"" pod="openstack/heat-api-767f4dd79b-llll8" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.273215 4966 generic.go:334] "Generic (PLEG): container finished" 
podID="8e72dc61-1e85-440d-a905-c8b1098da1d6" containerID="eba0113c798868784fc96c476133feb75a1eed03041e9ed7602fce0ad01c0b03" exitCode=1 Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.273250 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" event={"ID":"8e72dc61-1e85-440d-a905-c8b1098da1d6","Type":"ContainerDied","Data":"eba0113c798868784fc96c476133feb75a1eed03041e9ed7602fce0ad01c0b03"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.273347 4966 scope.go:117] "RemoveContainer" containerID="55694eee7a9b387b6bbacae9113f690753922e5bb716189e1e40528bc0d04eaf" Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.274240 4966 scope.go:117] "RemoveContainer" containerID="eba0113c798868784fc96c476133feb75a1eed03041e9ed7602fce0ad01c0b03" Jan 27 16:06:45 crc kubenswrapper[4966]: E0127 16:06:45.274596 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b59f8d4fb-hj8g9_openstack(8e72dc61-1e85-440d-a905-c8b1098da1d6)\"" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.276977 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bh6sq" event={"ID":"e4fc12a0-b687-4984-a892-6ce0cdf2c920","Type":"ContainerDied","Data":"3f899a31243032761f5d83a7ef68c6d39dd6794293514a68ce929515dcb4ea2f"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.277015 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f899a31243032761f5d83a7ef68c6d39dd6794293514a68ce929515dcb4ea2f" Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.277124 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bh6sq" Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.279584 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3089e5a-e4ea-4397-a676-fd4230311639","Type":"ContainerStarted","Data":"fc939e45e84a93374deff53df12a26c17182933aebdae4f71ba1068ad9c48efd"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.279613 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3089e5a-e4ea-4397-a676-fd4230311639","Type":"ContainerStarted","Data":"69f769c865fbf24181dfd72996cf2bc42a6b52fef7a54ea13bd829c95608ce3b"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.282658 4966 generic.go:334] "Generic (PLEG): container finished" podID="20b288ea-02c0-423e-b6d7-621f137afa58" containerID="dcfbbf86faa9752ddb4523bfbfe993503f3943cbe0e2fb222f8c9ffbecbc8280" exitCode=0 Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.282717 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" event={"ID":"20b288ea-02c0-423e-b6d7-621f137afa58","Type":"ContainerDied","Data":"dcfbbf86faa9752ddb4523bfbfe993503f3943cbe0e2fb222f8c9ffbecbc8280"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.293272 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c4e4ebf-c37e-45fb-9248-b60defccda7f","Type":"ContainerStarted","Data":"200eaf2e5c42ccbe5508a6d38b23093b09b16b94db1af69d7f801d7db7ad8269"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.295744 4966 generic.go:334] "Generic (PLEG): container finished" podID="66aafada-a280-4e39-96bd-0171e2f190f7" containerID="896ae9306b9d4c18ba3802f9b388c30ebde835563e201621a16126f893b022ad" exitCode=0 Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.295802 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-57bt9" event={"ID":"66aafada-a280-4e39-96bd-0171e2f190f7","Type":"ContainerDied","Data":"896ae9306b9d4c18ba3802f9b388c30ebde835563e201621a16126f893b022ad"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.308075 4966 generic.go:334] "Generic (PLEG): container finished" podID="d060b664-4536-45b0-921f-4bc5a759fb1a" containerID="eb0b24bc5997f3e6578816ab3d39d3ba4b520eaf856e176519b90ac75fd0f588" exitCode=0 Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.308259 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-039f-account-create-update-5k2qz" event={"ID":"d060b664-4536-45b0-921f-4bc5a759fb1a","Type":"ContainerDied","Data":"eb0b24bc5997f3e6578816ab3d39d3ba4b520eaf856e176519b90ac75fd0f588"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.309957 4966 generic.go:334] "Generic (PLEG): container finished" podID="1f3edb84-792d-4066-bfae-a43c8fefe6da" containerID="81d0c9cc6a9cd65a06ad5c8aea269792723874b5c60dfee8b219db19e7626468" exitCode=0 Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.311061 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9a9a-account-create-update-zf625" event={"ID":"1f3edb84-792d-4066-bfae-a43c8fefe6da","Type":"ContainerDied","Data":"81d0c9cc6a9cd65a06ad5c8aea269792723874b5c60dfee8b219db19e7626468"} Jan 27 16:06:45 crc kubenswrapper[4966]: I0127 16:06:45.367215 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.367192748 
podStartE2EDuration="5.367192748s" podCreationTimestamp="2026-01-27 16:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:45.342579586 +0000 UTC m=+1471.645373084" watchObservedRunningTime="2026-01-27 16:06:45.367192748 +0000 UTC m=+1471.669986226" Jan 27 16:06:46 crc kubenswrapper[4966]: I0127 16:06:46.327532 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c4e4ebf-c37e-45fb-9248-b60defccda7f","Type":"ContainerStarted","Data":"ac45dd58cb6484b040b3fcb9dbce646c4dcc093b21cd854cd3137cf50d7af72d"} Jan 27 16:06:46 crc kubenswrapper[4966]: I0127 16:06:46.329795 4966 scope.go:117] "RemoveContainer" containerID="eba0113c798868784fc96c476133feb75a1eed03041e9ed7602fce0ad01c0b03" Jan 27 16:06:46 crc kubenswrapper[4966]: E0127 16:06:46.330259 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b59f8d4fb-hj8g9_openstack(8e72dc61-1e85-440d-a905-c8b1098da1d6)\"" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" Jan 27 16:06:46 crc kubenswrapper[4966]: I0127 16:06:46.332250 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerStarted","Data":"011a3421660c0cd4cafbf7b70c395dd078e7afb83666ab17790d57d411747a10"} Jan 27 16:06:46 crc kubenswrapper[4966]: I0127 16:06:46.377419 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.377405544 podStartE2EDuration="5.377405544s" podCreationTimestamp="2026-01-27 16:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:46.362035802 +0000 UTC m=+1472.664829310" watchObservedRunningTime="2026-01-27 16:06:46.377405544 +0000 UTC m=+1472.680199032" Jan 27 16:06:46 crc kubenswrapper[4966]: I0127 16:06:46.562278 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.205361 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.212275 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.219749 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.240754 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.249391 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.301708 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzdvp\" (UniqueName: \"kubernetes.io/projected/66aafada-a280-4e39-96bd-0171e2f190f7-kube-api-access-wzdvp\") pod \"66aafada-a280-4e39-96bd-0171e2f190f7\" (UID: \"66aafada-a280-4e39-96bd-0171e2f190f7\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.301824 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zbpg\" (UniqueName: \"kubernetes.io/projected/d060b664-4536-45b0-921f-4bc5a759fb1a-kube-api-access-7zbpg\") pod \"d060b664-4536-45b0-921f-4bc5a759fb1a\" (UID: \"d060b664-4536-45b0-921f-4bc5a759fb1a\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.301873 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66aafada-a280-4e39-96bd-0171e2f190f7-operator-scripts\") pod \"66aafada-a280-4e39-96bd-0171e2f190f7\" (UID: \"66aafada-a280-4e39-96bd-0171e2f190f7\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.301909 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcl5k\" (UniqueName: \"kubernetes.io/projected/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-kube-api-access-wcl5k\") pod \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\" (UID: \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.301936 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3edb84-792d-4066-bfae-a43c8fefe6da-operator-scripts\") pod \"1f3edb84-792d-4066-bfae-a43c8fefe6da\" (UID: \"1f3edb84-792d-4066-bfae-a43c8fefe6da\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.301987 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20b288ea-02c0-423e-b6d7-621f137afa58-operator-scripts\") pod \"20b288ea-02c0-423e-b6d7-621f137afa58\" (UID: \"20b288ea-02c0-423e-b6d7-621f137afa58\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.302029 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2jxp\" (UniqueName: \"kubernetes.io/projected/1f3edb84-792d-4066-bfae-a43c8fefe6da-kube-api-access-p2jxp\") pod \"1f3edb84-792d-4066-bfae-a43c8fefe6da\" (UID: \"1f3edb84-792d-4066-bfae-a43c8fefe6da\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.302083 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-operator-scripts\") pod \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\" (UID: \"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.302185 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rplpb\" (UniqueName: \"kubernetes.io/projected/20b288ea-02c0-423e-b6d7-621f137afa58-kube-api-access-rplpb\") pod \"20b288ea-02c0-423e-b6d7-621f137afa58\" (UID: \"20b288ea-02c0-423e-b6d7-621f137afa58\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.302221 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d060b664-4536-45b0-921f-4bc5a759fb1a-operator-scripts\") pod \"d060b664-4536-45b0-921f-4bc5a759fb1a\" (UID: \"d060b664-4536-45b0-921f-4bc5a759fb1a\") " Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.303072 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d060b664-4536-45b0-921f-4bc5a759fb1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d060b664-4536-45b0-921f-4bc5a759fb1a" (UID: "d060b664-4536-45b0-921f-4bc5a759fb1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.303334 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe" (UID: "c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.303727 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3edb84-792d-4066-bfae-a43c8fefe6da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f3edb84-792d-4066-bfae-a43c8fefe6da" (UID: "1f3edb84-792d-4066-bfae-a43c8fefe6da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.304250 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66aafada-a280-4e39-96bd-0171e2f190f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66aafada-a280-4e39-96bd-0171e2f190f7" (UID: "66aafada-a280-4e39-96bd-0171e2f190f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.305541 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b288ea-02c0-423e-b6d7-621f137afa58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20b288ea-02c0-423e-b6d7-621f137afa58" (UID: "20b288ea-02c0-423e-b6d7-621f137afa58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.349645 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.350143 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a74e-account-create-update-vg7tr" event={"ID":"20b288ea-02c0-423e-b6d7-621f137afa58","Type":"ContainerDied","Data":"a35e9a867f1fb2bad4739cb208b9cd65f2425f751cf20993492e8696481ef8a9"} Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.350168 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a35e9a867f1fb2bad4739cb208b9cd65f2425f751cf20993492e8696481ef8a9" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.352272 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7djpj" event={"ID":"c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe","Type":"ContainerDied","Data":"ebb1cc0f6f70c07e1bb1a5bb16b9ef1617f83524ee8781d249bbb7b5e3f78d52"} Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.352407 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebb1cc0f6f70c07e1bb1a5bb16b9ef1617f83524ee8781d249bbb7b5e3f78d52" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.352538 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7djpj" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.354886 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-57bt9" event={"ID":"66aafada-a280-4e39-96bd-0171e2f190f7","Type":"ContainerDied","Data":"3c4814d17bcf4fd234b51906d9e86e86da5dace55c6a60709d19c6f5a2604bd8"} Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.355060 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c4814d17bcf4fd234b51906d9e86e86da5dace55c6a60709d19c6f5a2604bd8" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.355197 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-57bt9" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.356755 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-039f-account-create-update-5k2qz" event={"ID":"d060b664-4536-45b0-921f-4bc5a759fb1a","Type":"ContainerDied","Data":"8e1084f9007c9eb8fc2027ac61c0767f41ff9e31ad01876de38e7027276cc7d8"} Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.356836 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e1084f9007c9eb8fc2027ac61c0767f41ff9e31ad01876de38e7027276cc7d8" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.356932 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-039f-account-create-update-5k2qz" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.358465 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9a9a-account-create-update-zf625" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.358522 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9a9a-account-create-update-zf625" event={"ID":"1f3edb84-792d-4066-bfae-a43c8fefe6da","Type":"ContainerDied","Data":"80daac897d49cfca9297cf303fe0d54982c4c723ab2d1f8e6cd023a892a26768"} Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.358543 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80daac897d49cfca9297cf303fe0d54982c4c723ab2d1f8e6cd023a892a26768" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.404797 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.404831 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d060b664-4536-45b0-921f-4bc5a759fb1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.404845 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66aafada-a280-4e39-96bd-0171e2f190f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.404859 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3edb84-792d-4066-bfae-a43c8fefe6da-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.404871 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20b288ea-02c0-423e-b6d7-621f137afa58-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.569385 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66aafada-a280-4e39-96bd-0171e2f190f7-kube-api-access-wzdvp" (OuterVolumeSpecName: "kube-api-access-wzdvp") pod "66aafada-a280-4e39-96bd-0171e2f190f7" (UID: "66aafada-a280-4e39-96bd-0171e2f190f7"). InnerVolumeSpecName "kube-api-access-wzdvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.569507 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d060b664-4536-45b0-921f-4bc5a759fb1a-kube-api-access-7zbpg" (OuterVolumeSpecName: "kube-api-access-7zbpg") pod "d060b664-4536-45b0-921f-4bc5a759fb1a" (UID: "d060b664-4536-45b0-921f-4bc5a759fb1a"). InnerVolumeSpecName "kube-api-access-7zbpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.569575 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3edb84-792d-4066-bfae-a43c8fefe6da-kube-api-access-p2jxp" (OuterVolumeSpecName: "kube-api-access-p2jxp") pod "1f3edb84-792d-4066-bfae-a43c8fefe6da" (UID: "1f3edb84-792d-4066-bfae-a43c8fefe6da"). InnerVolumeSpecName "kube-api-access-p2jxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.569604 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b288ea-02c0-423e-b6d7-621f137afa58-kube-api-access-rplpb" (OuterVolumeSpecName: "kube-api-access-rplpb") pod "20b288ea-02c0-423e-b6d7-621f137afa58" (UID: "20b288ea-02c0-423e-b6d7-621f137afa58"). InnerVolumeSpecName "kube-api-access-rplpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.570220 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-kube-api-access-wcl5k" (OuterVolumeSpecName: "kube-api-access-wcl5k") pod "c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe" (UID: "c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe"). InnerVolumeSpecName "kube-api-access-wcl5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.617415 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzdvp\" (UniqueName: \"kubernetes.io/projected/66aafada-a280-4e39-96bd-0171e2f190f7-kube-api-access-wzdvp\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.617473 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zbpg\" (UniqueName: \"kubernetes.io/projected/d060b664-4536-45b0-921f-4bc5a759fb1a-kube-api-access-7zbpg\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.617492 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcl5k\" (UniqueName: \"kubernetes.io/projected/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe-kube-api-access-wcl5k\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.617508 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2jxp\" (UniqueName: \"kubernetes.io/projected/1f3edb84-792d-4066-bfae-a43c8fefe6da-kube-api-access-p2jxp\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.617524 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rplpb\" (UniqueName: \"kubernetes.io/projected/20b288ea-02c0-423e-b6d7-621f137afa58-kube-api-access-rplpb\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.870441 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.871455 4966 scope.go:117] "RemoveContainer" containerID="eba0113c798868784fc96c476133feb75a1eed03041e9ed7602fce0ad01c0b03" Jan 27 16:06:47 crc kubenswrapper[4966]: E0127 16:06:47.871763 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b59f8d4fb-hj8g9_openstack(8e72dc61-1e85-440d-a905-c8b1098da1d6)\"" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.950137 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:47 crc kubenswrapper[4966]: I0127 16:06:47.950828 4966 scope.go:117] "RemoveContainer" containerID="2e613c647969b1b4bccccf25db07b9fa88166bf69dceb4db7a0a1f400cee364a" Jan 27 
16:06:47 crc kubenswrapper[4966]: E0127 16:06:47.951115 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-767f4dd79b-llll8_openstack(e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421)\"" pod="openstack/heat-api-767f4dd79b-llll8" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" Jan 27 16:06:48 crc kubenswrapper[4966]: I0127 16:06:48.371107 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerStarted","Data":"e822352201079c39f08a1f3aee3ca267175f45a76fc2e25836de619f1004677d"} Jan 27 16:06:48 crc kubenswrapper[4966]: I0127 16:06:48.371278 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="ceilometer-central-agent" containerID="cri-o://9bff6d199561c3bef1275af1a978185d82a846130c9c5740cd21cee41bff4abe" gracePeriod=30 Jan 27 16:06:48 crc kubenswrapper[4966]: I0127 16:06:48.371356 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="proxy-httpd" containerID="cri-o://e822352201079c39f08a1f3aee3ca267175f45a76fc2e25836de619f1004677d" gracePeriod=30 Jan 27 16:06:48 crc kubenswrapper[4966]: I0127 16:06:48.371636 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 16:06:48 crc kubenswrapper[4966]: I0127 16:06:48.371394 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="sg-core" containerID="cri-o://011a3421660c0cd4cafbf7b70c395dd078e7afb83666ab17790d57d411747a10" gracePeriod=30 Jan 27 16:06:48 crc kubenswrapper[4966]: I0127 16:06:48.371374 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="ceilometer-notification-agent" containerID="cri-o://476030d34fce52a7e4ae563042570f430938d7bac77457f13a7c1c0e7387785d" gracePeriod=30 Jan 27 16:06:48 crc kubenswrapper[4966]: I0127 16:06:48.399962 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=12.945010435 podStartE2EDuration="19.399939311s" podCreationTimestamp="2026-01-27 16:06:29 +0000 UTC" firstStartedPulling="2026-01-27 16:06:40.439978964 +0000 UTC m=+1466.742772452" lastFinishedPulling="2026-01-27 16:06:46.89490785 +0000 UTC m=+1473.197701328" observedRunningTime="2026-01-27 16:06:48.3900263 +0000 UTC m=+1474.692819788" watchObservedRunningTime="2026-01-27 16:06:48.399939311 +0000 UTC m=+1474.702732799" Jan 27 16:06:49 crc kubenswrapper[4966]: I0127 16:06:49.386975 4966 generic.go:334] "Generic (PLEG): container finished" podID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerID="e822352201079c39f08a1f3aee3ca267175f45a76fc2e25836de619f1004677d" exitCode=0 Jan 27 16:06:49 crc kubenswrapper[4966]: I0127 16:06:49.387284 4966 generic.go:334] "Generic (PLEG): container finished" podID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerID="011a3421660c0cd4cafbf7b70c395dd078e7afb83666ab17790d57d411747a10" exitCode=2 Jan 27 16:06:49 crc kubenswrapper[4966]: I0127 16:06:49.387299 4966 generic.go:334] "Generic (PLEG): container finished" podID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" 
containerID="476030d34fce52a7e4ae563042570f430938d7bac77457f13a7c1c0e7387785d" exitCode=0 Jan 27 16:06:49 crc kubenswrapper[4966]: I0127 16:06:49.387195 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerDied","Data":"e822352201079c39f08a1f3aee3ca267175f45a76fc2e25836de619f1004677d"} Jan 27 16:06:49 crc kubenswrapper[4966]: I0127 16:06:49.387345 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerDied","Data":"011a3421660c0cd4cafbf7b70c395dd078e7afb83666ab17790d57d411747a10"} Jan 27 16:06:49 crc kubenswrapper[4966]: I0127 16:06:49.387362 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerDied","Data":"476030d34fce52a7e4ae563042570f430938d7bac77457f13a7c1c0e7387785d"} Jan 27 16:06:49 crc kubenswrapper[4966]: I0127 16:06:49.660476 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:49 crc kubenswrapper[4966]: I0127 16:06:49.660931 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:06:50 crc kubenswrapper[4966]: I0127 16:06:50.714828 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxq52" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" probeResult="failure" output=< Jan 27 16:06:50 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:06:50 crc kubenswrapper[4966]: > Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014129 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd9k4"] Jan 27 16:06:51 crc kubenswrapper[4966]: E0127 16:06:51.014569 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d060b664-4536-45b0-921f-4bc5a759fb1a" containerName="mariadb-account-create-update" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014585 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d060b664-4536-45b0-921f-4bc5a759fb1a" containerName="mariadb-account-create-update" Jan 27 16:06:51 crc kubenswrapper[4966]: E0127 16:06:51.014594 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3edb84-792d-4066-bfae-a43c8fefe6da" containerName="mariadb-account-create-update" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014600 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3edb84-792d-4066-bfae-a43c8fefe6da" containerName="mariadb-account-create-update" Jan 27 16:06:51 crc kubenswrapper[4966]: E0127 16:06:51.014608 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4fc12a0-b687-4984-a892-6ce0cdf2c920" containerName="mariadb-database-create" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014614 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4fc12a0-b687-4984-a892-6ce0cdf2c920" containerName="mariadb-database-create" Jan 27 16:06:51 crc kubenswrapper[4966]: E0127 16:06:51.014638 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66aafada-a280-4e39-96bd-0171e2f190f7" containerName="mariadb-database-create" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014643 4966 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="66aafada-a280-4e39-96bd-0171e2f190f7" containerName="mariadb-database-create" Jan 27 16:06:51 crc kubenswrapper[4966]: E0127 16:06:51.014653 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b288ea-02c0-423e-b6d7-621f137afa58" containerName="mariadb-account-create-update" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014660 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b288ea-02c0-423e-b6d7-621f137afa58" containerName="mariadb-account-create-update" Jan 27 16:06:51 crc kubenswrapper[4966]: E0127 16:06:51.014699 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" containerName="heat-api" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014705 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" containerName="heat-api" Jan 27 16:06:51 crc kubenswrapper[4966]: E0127 16:06:51.014718 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe" containerName="mariadb-database-create" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014723 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe" containerName="mariadb-database-create" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014941 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4fc12a0-b687-4984-a892-6ce0cdf2c920" containerName="mariadb-database-create" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014964 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b288ea-02c0-423e-b6d7-621f137afa58" containerName="mariadb-account-create-update" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014973 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="66aafada-a280-4e39-96bd-0171e2f190f7" containerName="mariadb-database-create" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014984 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3edb84-792d-4066-bfae-a43c8fefe6da" containerName="mariadb-account-create-update" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.014996 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1374f0-f77f-4d71-9e9d-e0276e4beaa2" containerName="heat-api" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.015007 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe" containerName="mariadb-database-create" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.015016 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="d060b664-4536-45b0-921f-4bc5a759fb1a" containerName="mariadb-account-create-update" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.015754 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.017912 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-g9f4k" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.017965 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.022455 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.024501 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd9k4"] Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.102084 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-scripts\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.102260 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.102500 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jsd2\" (UniqueName: \"kubernetes.io/projected/b2352ad0-f232-4d71-bb33-4bc933b12ca6-kube-api-access-8jsd2\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.102822 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-config-data\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.189116 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.205359 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-config-data\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.205449 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-scripts\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.205547 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.205652 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jsd2\" (UniqueName: \"kubernetes.io/projected/b2352ad0-f232-4d71-bb33-4bc933b12ca6-kube-api-access-8jsd2\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.217572 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-scripts\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.219625 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-config-data\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.228166 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.228950 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jsd2\" (UniqueName: \"kubernetes.io/projected/b2352ad0-f232-4d71-bb33-4bc933b12ca6-kube-api-access-8jsd2\") pod \"nova-cell0-conductor-db-sync-sd9k4\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.284593 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.290880 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b59f8d4fb-hj8g9"] Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.336956 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.412465 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-767f4dd79b-llll8"] Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.833078 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.833422 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.889404 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.904683 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.943086 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-combined-ca-bundle\") pod \"8e72dc61-1e85-440d-a905-c8b1098da1d6\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.943139 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data\") pod \"8e72dc61-1e85-440d-a905-c8b1098da1d6\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.943281 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data-custom\") pod \"8e72dc61-1e85-440d-a905-c8b1098da1d6\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.943315 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m5zd\" (UniqueName: \"kubernetes.io/projected/8e72dc61-1e85-440d-a905-c8b1098da1d6-kube-api-access-6m5zd\") pod \"8e72dc61-1e85-440d-a905-c8b1098da1d6\" (UID: \"8e72dc61-1e85-440d-a905-c8b1098da1d6\") " Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.945405 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.945509 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.954157 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e72dc61-1e85-440d-a905-c8b1098da1d6" (UID: "8e72dc61-1e85-440d-a905-c8b1098da1d6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.962629 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e72dc61-1e85-440d-a905-c8b1098da1d6-kube-api-access-6m5zd" (OuterVolumeSpecName: "kube-api-access-6m5zd") pod "8e72dc61-1e85-440d-a905-c8b1098da1d6" (UID: "8e72dc61-1e85-440d-a905-c8b1098da1d6"). InnerVolumeSpecName "kube-api-access-6m5zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.987245 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd9k4"] Jan 27 16:06:51 crc kubenswrapper[4966]: I0127 16:06:51.993752 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e72dc61-1e85-440d-a905-c8b1098da1d6" (UID: "8e72dc61-1e85-440d-a905-c8b1098da1d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.021359 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data" (OuterVolumeSpecName: "config-data") pod "8e72dc61-1e85-440d-a905-c8b1098da1d6" (UID: "8e72dc61-1e85-440d-a905-c8b1098da1d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.048774 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data\") pod \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.048907 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data-custom\") pod \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.064072 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqz9z\" (UniqueName: \"kubernetes.io/projected/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-kube-api-access-lqz9z\") pod \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.064335 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-combined-ca-bundle\") pod \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\" (UID: \"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421\") " Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.065576 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.065599 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m5zd\" (UniqueName: \"kubernetes.io/projected/8e72dc61-1e85-440d-a905-c8b1098da1d6-kube-api-access-6m5zd\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.065613 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.065625 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e72dc61-1e85-440d-a905-c8b1098da1d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.071625 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" (UID: "e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.078071 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-kube-api-access-lqz9z" (OuterVolumeSpecName: "kube-api-access-lqz9z") pod "e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" (UID: "e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421"). InnerVolumeSpecName "kube-api-access-lqz9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.156040 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" (UID: "e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.167828 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.167862 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.167874 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqz9z\" (UniqueName: \"kubernetes.io/projected/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-kube-api-access-lqz9z\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.175040 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data" (OuterVolumeSpecName: "config-data") pod "e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" (UID: "e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.269675 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.429495 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-767f4dd79b-llll8" event={"ID":"e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421","Type":"ContainerDied","Data":"91b82123ab8ea193d0c2379ba59fbc5080b0c2faba22f1f8f2a02bee23733920"} Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.429554 4966 scope.go:117] "RemoveContainer" containerID="2e613c647969b1b4bccccf25db07b9fa88166bf69dceb4db7a0a1f400cee364a" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.429550 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-767f4dd79b-llll8" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.431207 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" event={"ID":"8e72dc61-1e85-440d-a905-c8b1098da1d6","Type":"ContainerDied","Data":"0c68908095acab19e29db447e05b13e4b4bd1f2b72e0346edeadf1d462fb42f4"} Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.431218 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b59f8d4fb-hj8g9" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.432726 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sd9k4" event={"ID":"b2352ad0-f232-4d71-bb33-4bc933b12ca6","Type":"ContainerStarted","Data":"67ca1ad3ecb5611069b79e6746b60e30a4133390766e5316c9bbe2addf6f73c6"} Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.432958 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.432987 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.472702 4966 scope.go:117] "RemoveContainer" containerID="eba0113c798868784fc96c476133feb75a1eed03041e9ed7602fce0ad01c0b03" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.517456 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.517995 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.557677 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.602579 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.624014 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-767f4dd79b-llll8"] Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.636306 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-767f4dd79b-llll8"] Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.657693 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b59f8d4fb-hj8g9"] Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.672938 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7b59f8d4fb-hj8g9"] Jan 27 16:06:52 crc kubenswrapper[4966]: I0127 16:06:52.949376 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:06:53 crc kubenswrapper[4966]: I0127 16:06:53.134942 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-84f46dccf4-wb9z5"] Jan 27 16:06:53 crc kubenswrapper[4966]: I0127 16:06:53.135203 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-84f46dccf4-wb9z5" podUID="d4704b7a-7be4-4303-ab29-d9123555133f" containerName="heat-engine" containerID="cri-o://f7f564556805c1e4bc35ff9e16f5747520540171bb65850da8f17ed0c864c0ff" gracePeriod=60 Jan 27 16:06:53 crc 
kubenswrapper[4966]: I0127 16:06:53.447122 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:53 crc kubenswrapper[4966]: I0127 16:06:53.447156 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:54 crc kubenswrapper[4966]: I0127 16:06:54.540490 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" path="/var/lib/kubelet/pods/8e72dc61-1e85-440d-a905-c8b1098da1d6/volumes" Jan 27 16:06:54 crc kubenswrapper[4966]: I0127 16:06:54.541763 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" path="/var/lib/kubelet/pods/e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421/volumes" Jan 27 16:06:56 crc kubenswrapper[4966]: E0127 16:06:56.516271 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7f564556805c1e4bc35ff9e16f5747520540171bb65850da8f17ed0c864c0ff" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 16:06:56 crc kubenswrapper[4966]: E0127 16:06:56.518567 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7f564556805c1e4bc35ff9e16f5747520540171bb65850da8f17ed0c864c0ff" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 16:06:56 crc kubenswrapper[4966]: E0127 16:06:56.521011 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7f564556805c1e4bc35ff9e16f5747520540171bb65850da8f17ed0c864c0ff" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 16:06:56 crc kubenswrapper[4966]: E0127 16:06:56.521078 4966 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-84f46dccf4-wb9z5" podUID="d4704b7a-7be4-4303-ab29-d9123555133f" containerName="heat-engine" Jan 27 16:06:58 crc kubenswrapper[4966]: I0127 16:06:58.512079 4966 generic.go:334] "Generic (PLEG): container finished" podID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerID="9bff6d199561c3bef1275af1a978185d82a846130c9c5740cd21cee41bff4abe" exitCode=0 Jan 27 16:06:58 crc kubenswrapper[4966]: I0127 16:06:58.512158 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerDied","Data":"9bff6d199561c3bef1275af1a978185d82a846130c9c5740cd21cee41bff4abe"} Jan 27 16:06:58 crc kubenswrapper[4966]: I0127 16:06:58.684783 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:58 crc kubenswrapper[4966]: I0127 16:06:58.685251 4966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 16:06:58 crc kubenswrapper[4966]: I0127 16:06:58.690581 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 16:06:58 crc kubenswrapper[4966]: I0127 16:06:58.699798 4966 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 16:06:58 crc kubenswrapper[4966]: I0127 16:06:58.700016 4966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 16:06:58 crc kubenswrapper[4966]: I0127 16:06:58.720395 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 16:06:59 crc kubenswrapper[4966]: I0127 16:06:59.525320 4966 generic.go:334] "Generic (PLEG): container finished" podID="d4704b7a-7be4-4303-ab29-d9123555133f" containerID="f7f564556805c1e4bc35ff9e16f5747520540171bb65850da8f17ed0c864c0ff" exitCode=0 Jan 27 16:06:59 crc kubenswrapper[4966]: I0127 16:06:59.525531 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84f46dccf4-wb9z5" event={"ID":"d4704b7a-7be4-4303-ab29-d9123555133f","Type":"ContainerDied","Data":"f7f564556805c1e4bc35ff9e16f5747520540171bb65850da8f17ed0c864c0ff"} Jan 27 16:06:59 crc kubenswrapper[4966]: I0127 16:06:59.816729 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.217:3000/\": dial tcp 10.217.0.217:3000: connect: connection refused" Jan 27 16:07:00 crc kubenswrapper[4966]: I0127 16:07:00.740669 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxq52" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" probeResult="failure" output=< Jan 27 16:07:00 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:07:00 crc kubenswrapper[4966]: > Jan 27 16:07:02 crc kubenswrapper[4966]: I0127 16:07:02.977254 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:02 crc kubenswrapper[4966]: I0127 16:07:02.977844 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.056577 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data-custom\") pod \"d4704b7a-7be4-4303-ab29-d9123555133f\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.056768 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data\") pod \"d4704b7a-7be4-4303-ab29-d9123555133f\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.056802 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-config-data\") pod \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.057030 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn8j5\" (UniqueName: \"kubernetes.io/projected/b939989d-e7fd-4781-b285-ae9f22fc9bf4-kube-api-access-jn8j5\") pod \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.057066 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-sg-core-conf-yaml\") pod \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.057228 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-scripts\") pod \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.057257 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-combined-ca-bundle\") pod \"d4704b7a-7be4-4303-ab29-d9123555133f\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.057363 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-run-httpd\") pod \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.057382 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-combined-ca-bundle\") pod \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.057642 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcch2\" (UniqueName: \"kubernetes.io/projected/d4704b7a-7be4-4303-ab29-d9123555133f-kube-api-access-zcch2\") pod 
\"d4704b7a-7be4-4303-ab29-d9123555133f\" (UID: \"d4704b7a-7be4-4303-ab29-d9123555133f\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.057813 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-log-httpd\") pod \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\" (UID: \"b939989d-e7fd-4781-b285-ae9f22fc9bf4\") " Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.059542 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b939989d-e7fd-4781-b285-ae9f22fc9bf4" (UID: "b939989d-e7fd-4781-b285-ae9f22fc9bf4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.060139 4966 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.065171 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b939989d-e7fd-4781-b285-ae9f22fc9bf4" (UID: "b939989d-e7fd-4781-b285-ae9f22fc9bf4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.066368 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-scripts" (OuterVolumeSpecName: "scripts") pod "b939989d-e7fd-4781-b285-ae9f22fc9bf4" (UID: "b939989d-e7fd-4781-b285-ae9f22fc9bf4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.066589 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d4704b7a-7be4-4303-ab29-d9123555133f" (UID: "d4704b7a-7be4-4303-ab29-d9123555133f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.082101 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4704b7a-7be4-4303-ab29-d9123555133f-kube-api-access-zcch2" (OuterVolumeSpecName: "kube-api-access-zcch2") pod "d4704b7a-7be4-4303-ab29-d9123555133f" (UID: "d4704b7a-7be4-4303-ab29-d9123555133f"). InnerVolumeSpecName "kube-api-access-zcch2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.082678 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b939989d-e7fd-4781-b285-ae9f22fc9bf4-kube-api-access-jn8j5" (OuterVolumeSpecName: "kube-api-access-jn8j5") pod "b939989d-e7fd-4781-b285-ae9f22fc9bf4" (UID: "b939989d-e7fd-4781-b285-ae9f22fc9bf4"). InnerVolumeSpecName "kube-api-access-jn8j5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.116237 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4704b7a-7be4-4303-ab29-d9123555133f" (UID: "d4704b7a-7be4-4303-ab29-d9123555133f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.145413 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b939989d-e7fd-4781-b285-ae9f22fc9bf4" (UID: "b939989d-e7fd-4781-b285-ae9f22fc9bf4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.162074 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.162103 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.162115 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcch2\" (UniqueName: \"kubernetes.io/projected/d4704b7a-7be4-4303-ab29-d9123555133f-kube-api-access-zcch2\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.162125 4966 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b939989d-e7fd-4781-b285-ae9f22fc9bf4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.162134 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.162142 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn8j5\" (UniqueName: \"kubernetes.io/projected/b939989d-e7fd-4781-b285-ae9f22fc9bf4-kube-api-access-jn8j5\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.162151 4966 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.189549 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data" (OuterVolumeSpecName: "config-data") pod "d4704b7a-7be4-4303-ab29-d9123555133f" (UID: "d4704b7a-7be4-4303-ab29-d9123555133f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.213044 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-config-data" (OuterVolumeSpecName: "config-data") pod "b939989d-e7fd-4781-b285-ae9f22fc9bf4" (UID: "b939989d-e7fd-4781-b285-ae9f22fc9bf4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.220826 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b939989d-e7fd-4781-b285-ae9f22fc9bf4" (UID: "b939989d-e7fd-4781-b285-ae9f22fc9bf4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.266771 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4704b7a-7be4-4303-ab29-d9123555133f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.266808 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.266818 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b939989d-e7fd-4781-b285-ae9f22fc9bf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.591957 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-84f46dccf4-wb9z5" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.591975 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84f46dccf4-wb9z5" event={"ID":"d4704b7a-7be4-4303-ab29-d9123555133f","Type":"ContainerDied","Data":"ddea96eb08ad5e8cc044256403dc94c12bb7afddbca3d63d11aeb547a6bea4b8"} Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.592982 4966 scope.go:117] "RemoveContainer" containerID="f7f564556805c1e4bc35ff9e16f5747520540171bb65850da8f17ed0c864c0ff" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.595086 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sd9k4" event={"ID":"b2352ad0-f232-4d71-bb33-4bc933b12ca6","Type":"ContainerStarted","Data":"6076e27de99e7a8620e839aa9ebb4c9b7fd2ea23a74f1df77ecaaf9403f6eb1e"} Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.603391 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b939989d-e7fd-4781-b285-ae9f22fc9bf4","Type":"ContainerDied","Data":"5d3e2d20036aad7be4c7d10f6d20d4265f41e574225c6070de914becfa4f8625"} Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.603535 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.622889 4966 scope.go:117] "RemoveContainer" containerID="e822352201079c39f08a1f3aee3ca267175f45a76fc2e25836de619f1004677d" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.654826 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-sd9k4" podStartSLOduration=3.221187129 podStartE2EDuration="13.654804686s" podCreationTimestamp="2026-01-27 16:06:50 +0000 UTC" firstStartedPulling="2026-01-27 16:06:51.956103946 +0000 UTC m=+1478.258897434" lastFinishedPulling="2026-01-27 16:07:02.389721503 +0000 UTC m=+1488.692514991" observedRunningTime="2026-01-27 16:07:03.631935288 +0000 UTC m=+1489.934728776" watchObservedRunningTime="2026-01-27 16:07:03.654804686 +0000 UTC m=+1489.957598174" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.675996 4966 scope.go:117] "RemoveContainer" containerID="011a3421660c0cd4cafbf7b70c395dd078e7afb83666ab17790d57d411747a10" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.699200 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-84f46dccf4-wb9z5"] Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.729830 4966 scope.go:117] "RemoveContainer" containerID="476030d34fce52a7e4ae563042570f430938d7bac77457f13a7c1c0e7387785d" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.737944 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-84f46dccf4-wb9z5"] Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.761104 4966 scope.go:117] "RemoveContainer" containerID="9bff6d199561c3bef1275af1a978185d82a846130c9c5740cd21cee41bff4abe" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.761261 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.777544 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.792398 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:03 crc kubenswrapper[4966]: E0127 16:07:03.793107 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" containerName="heat-api" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793124 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" containerName="heat-api" Jan 27 16:07:03 crc kubenswrapper[4966]: E0127 16:07:03.793137 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" containerName="heat-cfnapi" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793143 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" containerName="heat-cfnapi" Jan 27 16:07:03 crc kubenswrapper[4966]: E0127 16:07:03.793155 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="proxy-httpd" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793161 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="proxy-httpd" Jan 27 16:07:03 crc kubenswrapper[4966]: E0127 16:07:03.793177 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4704b7a-7be4-4303-ab29-d9123555133f" containerName="heat-engine" Jan 27 16:07:03 crc 
kubenswrapper[4966]: I0127 16:07:03.793183 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4704b7a-7be4-4303-ab29-d9123555133f" containerName="heat-engine" Jan 27 16:07:03 crc kubenswrapper[4966]: E0127 16:07:03.793212 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="ceilometer-central-agent" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793218 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="ceilometer-central-agent" Jan 27 16:07:03 crc kubenswrapper[4966]: E0127 16:07:03.793230 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="sg-core" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793235 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="sg-core" Jan 27 16:07:03 crc kubenswrapper[4966]: E0127 16:07:03.793247 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="ceilometer-notification-agent" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793253 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="ceilometer-notification-agent" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793541 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4704b7a-7be4-4303-ab29-d9123555133f" containerName="heat-engine" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793552 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="sg-core" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793568 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" containerName="heat-api" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793580 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="ceilometer-notification-agent" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793591 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="ceilometer-central-agent" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793599 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" containerName="proxy-httpd" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793622 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" containerName="heat-cfnapi" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793633 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" containerName="heat-cfnapi" Jan 27 16:07:03 crc kubenswrapper[4966]: E0127 16:07:03.793862 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" containerName="heat-api" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793875 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" containerName="heat-api" Jan 27 16:07:03 crc kubenswrapper[4966]: E0127 16:07:03.793910 4966 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" containerName="heat-cfnapi" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.793920 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e72dc61-1e85-440d-a905-c8b1098da1d6" containerName="heat-cfnapi" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.794278 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fbe7b0-b1d1-4c76-9bf8-ca0810c6e421" containerName="heat-api" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.798130 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.801185 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.801807 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.816194 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.883146 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.883262 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.883304 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-run-httpd\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.883324 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t57pm\" (UniqueName: \"kubernetes.io/projected/2d12fc36-1730-412d-97bd-92054dca1544-kube-api-access-t57pm\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.883345 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-config-data\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.883360 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-log-httpd\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.883454 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-scripts\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.985945 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.986022 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-run-httpd\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.986052 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t57pm\" (UniqueName: \"kubernetes.io/projected/2d12fc36-1730-412d-97bd-92054dca1544-kube-api-access-t57pm\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.986088 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-config-data\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.986111 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-log-httpd\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.986574 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-run-httpd\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.986694 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-log-httpd\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.987179 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-scripts\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.987358 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.991563 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-scripts\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.992273 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:03 crc kubenswrapper[4966]: I0127 16:07:03.992797 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-config-data\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:04 crc kubenswrapper[4966]: I0127 16:07:04.000350 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:04 crc kubenswrapper[4966]: I0127 16:07:04.004289 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t57pm\" (UniqueName: \"kubernetes.io/projected/2d12fc36-1730-412d-97bd-92054dca1544-kube-api-access-t57pm\") pod \"ceilometer-0\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " pod="openstack/ceilometer-0" Jan 27 16:07:04 crc kubenswrapper[4966]: I0127 16:07:04.117544 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:04 crc kubenswrapper[4966]: I0127 16:07:04.542636 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b939989d-e7fd-4781-b285-ae9f22fc9bf4" path="/var/lib/kubelet/pods/b939989d-e7fd-4781-b285-ae9f22fc9bf4/volumes" Jan 27 16:07:04 crc kubenswrapper[4966]: I0127 16:07:04.544015 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4704b7a-7be4-4303-ab29-d9123555133f" path="/var/lib/kubelet/pods/d4704b7a-7be4-4303-ab29-d9123555133f/volumes" Jan 27 16:07:04 crc kubenswrapper[4966]: I0127 16:07:04.605738 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:04 crc kubenswrapper[4966]: I0127 16:07:04.617580 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerStarted","Data":"c2d52f07a9679b39885d60b03907050af8c2916b259e7d540af9d561d00b72b0"} Jan 27 16:07:05 crc kubenswrapper[4966]: I0127 16:07:05.639177 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerStarted","Data":"2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535"} Jan 27 16:07:06 crc kubenswrapper[4966]: I0127 16:07:06.655195 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerStarted","Data":"ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14"} Jan 27 16:07:08 crc kubenswrapper[4966]: I0127 16:07:08.683003 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerStarted","Data":"a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4"} Jan 27 16:07:09 crc kubenswrapper[4966]: I0127 16:07:09.697241 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerStarted","Data":"188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca"} Jan 27 16:07:09 crc kubenswrapper[4966]: I0127 16:07:09.697862 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 16:07:09 crc kubenswrapper[4966]: I0127 16:07:09.730501 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.018775066 podStartE2EDuration="6.730477321s" podCreationTimestamp="2026-01-27 16:07:03 +0000 UTC" firstStartedPulling="2026-01-27 16:07:04.60697248 +0000 UTC m=+1490.909765968" lastFinishedPulling="2026-01-27 16:07:09.318674735 +0000 UTC m=+1495.621468223" observedRunningTime="2026-01-27 16:07:09.718779555 +0000 UTC m=+1496.021573073" watchObservedRunningTime="2026-01-27 16:07:09.730477321 +0000 UTC m=+1496.033270809" Jan 27 16:07:10 crc kubenswrapper[4966]: I0127 16:07:10.119439 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:07:10 crc kubenswrapper[4966]: I0127 16:07:10.119507 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:07:10 crc kubenswrapper[4966]: I0127 16:07:10.723327 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxq52" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" probeResult="failure" output=< Jan 27 16:07:10 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:07:10 crc kubenswrapper[4966]: > Jan 27 16:07:11 crc kubenswrapper[4966]: I0127 16:07:11.543285 4966 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podae145432-f4ac-4937-a71d-5c871832c20a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podae145432-f4ac-4937-a71d-5c871832c20a] : Timed out while waiting for systemd to remove kubepods-besteffort-podae145432_f4ac_4937_a71d_5c871832c20a.slice" Jan 27 16:07:11 crc kubenswrapper[4966]: I0127 16:07:11.687105 4966 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod856ac2bf-cc02-4921-84ff-947e5b947997"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod856ac2bf-cc02-4921-84ff-947e5b947997] : Timed out while waiting for systemd to remove kubepods-besteffort-pod856ac2bf_cc02_4921_84ff_947e5b947997.slice" Jan 27 16:07:15 crc kubenswrapper[4966]: I0127 16:07:15.759979 4966 generic.go:334] "Generic (PLEG): container finished" podID="b2352ad0-f232-4d71-bb33-4bc933b12ca6" containerID="6076e27de99e7a8620e839aa9ebb4c9b7fd2ea23a74f1df77ecaaf9403f6eb1e" exitCode=0 Jan 27 16:07:15 crc 
kubenswrapper[4966]: I0127 16:07:15.760739 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sd9k4" event={"ID":"b2352ad0-f232-4d71-bb33-4bc933b12ca6","Type":"ContainerDied","Data":"6076e27de99e7a8620e839aa9ebb4c9b7fd2ea23a74f1df77ecaaf9403f6eb1e"} Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.290477 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.323773 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-scripts\") pod \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.324399 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-combined-ca-bundle\") pod \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.325646 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jsd2\" (UniqueName: \"kubernetes.io/projected/b2352ad0-f232-4d71-bb33-4bc933b12ca6-kube-api-access-8jsd2\") pod \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.325818 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-config-data\") pod \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\" (UID: \"b2352ad0-f232-4d71-bb33-4bc933b12ca6\") " Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.334142 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-scripts" (OuterVolumeSpecName: "scripts") pod "b2352ad0-f232-4d71-bb33-4bc933b12ca6" (UID: "b2352ad0-f232-4d71-bb33-4bc933b12ca6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.341961 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2352ad0-f232-4d71-bb33-4bc933b12ca6-kube-api-access-8jsd2" (OuterVolumeSpecName: "kube-api-access-8jsd2") pod "b2352ad0-f232-4d71-bb33-4bc933b12ca6" (UID: "b2352ad0-f232-4d71-bb33-4bc933b12ca6"). InnerVolumeSpecName "kube-api-access-8jsd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.370204 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2352ad0-f232-4d71-bb33-4bc933b12ca6" (UID: "b2352ad0-f232-4d71-bb33-4bc933b12ca6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.389007 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-config-data" (OuterVolumeSpecName: "config-data") pod "b2352ad0-f232-4d71-bb33-4bc933b12ca6" (UID: "b2352ad0-f232-4d71-bb33-4bc933b12ca6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.430087 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jsd2\" (UniqueName: \"kubernetes.io/projected/b2352ad0-f232-4d71-bb33-4bc933b12ca6-kube-api-access-8jsd2\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.430121 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.430135 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.430144 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2352ad0-f232-4d71-bb33-4bc933b12ca6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.784031 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sd9k4" event={"ID":"b2352ad0-f232-4d71-bb33-4bc933b12ca6","Type":"ContainerDied","Data":"67ca1ad3ecb5611069b79e6746b60e30a4133390766e5316c9bbe2addf6f73c6"} Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.784071 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67ca1ad3ecb5611069b79e6746b60e30a4133390766e5316c9bbe2addf6f73c6" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.784124 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sd9k4" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.903389 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 16:07:17 crc kubenswrapper[4966]: E0127 16:07:17.904114 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2352ad0-f232-4d71-bb33-4bc933b12ca6" containerName="nova-cell0-conductor-db-sync" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.904138 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2352ad0-f232-4d71-bb33-4bc933b12ca6" containerName="nova-cell0-conductor-db-sync" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.904458 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2352ad0-f232-4d71-bb33-4bc933b12ca6" containerName="nova-cell0-conductor-db-sync" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.905567 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.907846 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-g9f4k" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.908174 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.927439 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.941087 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7f80cc-7f25-4546-b728-b483e4690acd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea7f80cc-7f25-4546-b728-b483e4690acd\") " pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.941246 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntmqb\" (UniqueName: \"kubernetes.io/projected/ea7f80cc-7f25-4546-b728-b483e4690acd-kube-api-access-ntmqb\") pod \"nova-cell0-conductor-0\" (UID: \"ea7f80cc-7f25-4546-b728-b483e4690acd\") " pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:17 crc kubenswrapper[4966]: I0127 16:07:17.941284 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7f80cc-7f25-4546-b728-b483e4690acd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea7f80cc-7f25-4546-b728-b483e4690acd\") " pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.043289 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7f80cc-7f25-4546-b728-b483e4690acd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea7f80cc-7f25-4546-b728-b483e4690acd\") " pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.043443 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntmqb\" (UniqueName: \"kubernetes.io/projected/ea7f80cc-7f25-4546-b728-b483e4690acd-kube-api-access-ntmqb\") pod \"nova-cell0-conductor-0\" (UID: \"ea7f80cc-7f25-4546-b728-b483e4690acd\") " pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.043490 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7f80cc-7f25-4546-b728-b483e4690acd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea7f80cc-7f25-4546-b728-b483e4690acd\") " pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.048788 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7f80cc-7f25-4546-b728-b483e4690acd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea7f80cc-7f25-4546-b728-b483e4690acd\") " pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.049518 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7f80cc-7f25-4546-b728-b483e4690acd-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"ea7f80cc-7f25-4546-b728-b483e4690acd\") " pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.070809 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntmqb\" (UniqueName: \"kubernetes.io/projected/ea7f80cc-7f25-4546-b728-b483e4690acd-kube-api-access-ntmqb\") pod \"nova-cell0-conductor-0\" (UID: \"ea7f80cc-7f25-4546-b728-b483e4690acd\") " pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.222931 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.554730 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.555251 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="ceilometer-central-agent" containerID="cri-o://2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535" gracePeriod=30 Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.555304 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="proxy-httpd" containerID="cri-o://188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca" gracePeriod=30 Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.555299 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="sg-core" containerID="cri-o://a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4" gracePeriod=30 Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.555355 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="ceilometer-notification-agent" containerID="cri-o://ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14" gracePeriod=30 Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.732665 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.801317 4966 generic.go:334] "Generic (PLEG): container finished" podID="2d12fc36-1730-412d-97bd-92054dca1544" containerID="188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca" exitCode=0 Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.801598 4966 generic.go:334] "Generic (PLEG): container finished" podID="2d12fc36-1730-412d-97bd-92054dca1544" containerID="a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4" exitCode=2 Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.801387 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerDied","Data":"188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca"} Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.801648 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerDied","Data":"a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4"} Jan 27 16:07:18 crc kubenswrapper[4966]: I0127 16:07:18.803620 4966 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea7f80cc-7f25-4546-b728-b483e4690acd","Type":"ContainerStarted","Data":"8db0de5f2f72a42450683fab73873ed70693174864a75f3a691f5d01b492fe8b"} Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.543562 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.592874 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t57pm\" (UniqueName: \"kubernetes.io/projected/2d12fc36-1730-412d-97bd-92054dca1544-kube-api-access-t57pm\") pod \"2d12fc36-1730-412d-97bd-92054dca1544\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.593004 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-combined-ca-bundle\") pod \"2d12fc36-1730-412d-97bd-92054dca1544\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.593069 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-scripts\") pod \"2d12fc36-1730-412d-97bd-92054dca1544\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.593103 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-run-httpd\") pod \"2d12fc36-1730-412d-97bd-92054dca1544\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.593158 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-config-data\") pod \"2d12fc36-1730-412d-97bd-92054dca1544\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.593196 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-sg-core-conf-yaml\") pod \"2d12fc36-1730-412d-97bd-92054dca1544\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.593307 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-log-httpd\") pod \"2d12fc36-1730-412d-97bd-92054dca1544\" (UID: \"2d12fc36-1730-412d-97bd-92054dca1544\") " Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.594478 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d12fc36-1730-412d-97bd-92054dca1544" (UID: "2d12fc36-1730-412d-97bd-92054dca1544"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.594765 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d12fc36-1730-412d-97bd-92054dca1544" (UID: "2d12fc36-1730-412d-97bd-92054dca1544"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.619049 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-scripts" (OuterVolumeSpecName: "scripts") pod "2d12fc36-1730-412d-97bd-92054dca1544" (UID: "2d12fc36-1730-412d-97bd-92054dca1544"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.621105 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d12fc36-1730-412d-97bd-92054dca1544-kube-api-access-t57pm" (OuterVolumeSpecName: "kube-api-access-t57pm") pod "2d12fc36-1730-412d-97bd-92054dca1544" (UID: "2d12fc36-1730-412d-97bd-92054dca1544"). InnerVolumeSpecName "kube-api-access-t57pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.658311 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-lfwmv"] Jan 27 16:07:19 crc kubenswrapper[4966]: E0127 16:07:19.658919 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="sg-core" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.658941 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="sg-core" Jan 27 16:07:19 crc kubenswrapper[4966]: E0127 16:07:19.658957 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="ceilometer-notification-agent" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.658963 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="ceilometer-notification-agent" Jan 27 16:07:19 crc kubenswrapper[4966]: E0127 16:07:19.658979 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="proxy-httpd" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.658988 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="proxy-httpd" Jan 27 16:07:19 crc kubenswrapper[4966]: E0127 16:07:19.659038 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="ceilometer-central-agent" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.659045 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="ceilometer-central-agent" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.659276 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="ceilometer-notification-agent" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.659289 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="sg-core" Jan 27 16:07:19 
crc kubenswrapper[4966]: I0127 16:07:19.659304 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="proxy-httpd" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.659326 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d12fc36-1730-412d-97bd-92054dca1544" containerName="ceilometer-central-agent" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.660299 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.676478 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-lfwmv"] Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.679816 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d12fc36-1730-412d-97bd-92054dca1544" (UID: "2d12fc36-1730-412d-97bd-92054dca1544"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.695977 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb334693-8f68-43f6-9d39-925d1dcd1e03-operator-scripts\") pod \"aodh-db-create-lfwmv\" (UID: \"bb334693-8f68-43f6-9d39-925d1dcd1e03\") " pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.696350 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m79v\" (UniqueName: \"kubernetes.io/projected/bb334693-8f68-43f6-9d39-925d1dcd1e03-kube-api-access-4m79v\") pod \"aodh-db-create-lfwmv\" (UID: \"bb334693-8f68-43f6-9d39-925d1dcd1e03\") " pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.696629 4966 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.696643 4966 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.696674 4966 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d12fc36-1730-412d-97bd-92054dca1544-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.696683 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t57pm\" (UniqueName: \"kubernetes.io/projected/2d12fc36-1730-412d-97bd-92054dca1544-kube-api-access-t57pm\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.696692 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.762919 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-36ca-account-create-update-vctkf"] Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.764379 4966 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.766188 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.772237 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-config-data" (OuterVolumeSpecName: "config-data") pod "2d12fc36-1730-412d-97bd-92054dca1544" (UID: "2d12fc36-1730-412d-97bd-92054dca1544"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.772623 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-36ca-account-create-update-vctkf"] Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.798799 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb334693-8f68-43f6-9d39-925d1dcd1e03-operator-scripts\") pod \"aodh-db-create-lfwmv\" (UID: \"bb334693-8f68-43f6-9d39-925d1dcd1e03\") " pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.798928 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cp7q\" (UniqueName: \"kubernetes.io/projected/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-kube-api-access-4cp7q\") pod \"aodh-36ca-account-create-update-vctkf\" (UID: \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\") " pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.798965 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m79v\" (UniqueName: \"kubernetes.io/projected/bb334693-8f68-43f6-9d39-925d1dcd1e03-kube-api-access-4m79v\") pod \"aodh-db-create-lfwmv\" (UID: \"bb334693-8f68-43f6-9d39-925d1dcd1e03\") " pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.799121 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-operator-scripts\") pod \"aodh-36ca-account-create-update-vctkf\" (UID: \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\") " pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.799358 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.799720 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb334693-8f68-43f6-9d39-925d1dcd1e03-operator-scripts\") pod \"aodh-db-create-lfwmv\" (UID: \"bb334693-8f68-43f6-9d39-925d1dcd1e03\") " pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.811635 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d12fc36-1730-412d-97bd-92054dca1544" (UID: "2d12fc36-1730-412d-97bd-92054dca1544"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.814301 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m79v\" (UniqueName: \"kubernetes.io/projected/bb334693-8f68-43f6-9d39-925d1dcd1e03-kube-api-access-4m79v\") pod \"aodh-db-create-lfwmv\" (UID: \"bb334693-8f68-43f6-9d39-925d1dcd1e03\") " pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.817960 4966 generic.go:334] "Generic (PLEG): container finished" podID="2d12fc36-1730-412d-97bd-92054dca1544" containerID="ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14" exitCode=0 Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.818005 4966 generic.go:334] "Generic (PLEG): container finished" podID="2d12fc36-1730-412d-97bd-92054dca1544" containerID="2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535" exitCode=0 Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.818054 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerDied","Data":"ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14"} Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.818103 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerDied","Data":"2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535"} Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.818116 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d12fc36-1730-412d-97bd-92054dca1544","Type":"ContainerDied","Data":"c2d52f07a9679b39885d60b03907050af8c2916b259e7d540af9d561d00b72b0"} Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.818131 4966 scope.go:117] "RemoveContainer" containerID="188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.818237 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.821144 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea7f80cc-7f25-4546-b728-b483e4690acd","Type":"ContainerStarted","Data":"4d243c6e2be0191b1221b48dd22459909ca8ad2ef6370ac88440dae60c5339ab"} Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.822664 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.852136 4966 scope.go:117] "RemoveContainer" containerID="a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.859236 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.8592028369999998 podStartE2EDuration="2.859202837s" podCreationTimestamp="2026-01-27 16:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:07:19.837710063 +0000 UTC m=+1506.140503551" watchObservedRunningTime="2026-01-27 16:07:19.859202837 +0000 UTC m=+1506.161996325" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.890107 4966 scope.go:117] "RemoveContainer" containerID="ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.917328 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cp7q\" (UniqueName: \"kubernetes.io/projected/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-kube-api-access-4cp7q\") pod \"aodh-36ca-account-create-update-vctkf\" (UID: \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\") " pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.919209 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-operator-scripts\") pod \"aodh-36ca-account-create-update-vctkf\" (UID: \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\") " pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.919549 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d12fc36-1730-412d-97bd-92054dca1544-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.920764 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-operator-scripts\") pod \"aodh-36ca-account-create-update-vctkf\" (UID: \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\") " pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.935109 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cp7q\" (UniqueName: \"kubernetes.io/projected/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-kube-api-access-4cp7q\") pod \"aodh-36ca-account-create-update-vctkf\" (UID: \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\") " pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.939408 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:19 crc 
kubenswrapper[4966]: I0127 16:07:19.941036 4966 scope.go:117] "RemoveContainer" containerID="2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.953684 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.972661 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.973197 4966 scope.go:117] "RemoveContainer" containerID="188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca" Jan 27 16:07:19 crc kubenswrapper[4966]: E0127 16:07:19.973576 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca\": container with ID starting with 188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca not found: ID does not exist" containerID="188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.973611 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca"} err="failed to get container status \"188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca\": rpc error: code = NotFound desc = could not find container \"188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca\": container with ID starting with 188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca not found: ID does not exist" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.973638 4966 scope.go:117] "RemoveContainer" containerID="a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4" Jan 27 16:07:19 crc kubenswrapper[4966]: E0127 16:07:19.973827 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4\": container with ID starting with a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4 not found: ID does not exist" containerID="a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.973848 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4"} err="failed to get container status \"a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4\": rpc error: code = NotFound desc = could not find container \"a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4\": container with ID starting with a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4 not found: ID does not exist" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.973862 4966 scope.go:117] "RemoveContainer" containerID="ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14" Jan 27 16:07:19 crc kubenswrapper[4966]: E0127 16:07:19.974262 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14\": container with ID starting with ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14 not found: ID does not exist" 
containerID="ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.974283 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14"} err="failed to get container status \"ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14\": rpc error: code = NotFound desc = could not find container \"ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14\": container with ID starting with ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14 not found: ID does not exist" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.974298 4966 scope.go:117] "RemoveContainer" containerID="2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535" Jan 27 16:07:19 crc kubenswrapper[4966]: E0127 16:07:19.974501 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535\": container with ID starting with 2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535 not found: ID does not exist" containerID="2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.974524 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535"} err="failed to get container status \"2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535\": rpc error: code = NotFound desc = could not find container \"2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535\": container with ID starting with 2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535 not found: ID does not exist" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.974538 4966 scope.go:117] "RemoveContainer" containerID="188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.974707 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca"} err="failed to get container status \"188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca\": rpc error: code = NotFound desc = could not find container \"188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca\": container with ID starting with 188d167816ceaf59541de38b4193f172ed587f5f14de0f95ea87f76cb53050ca not found: ID does not exist" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.974729 4966 scope.go:117] "RemoveContainer" containerID="a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.974885 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4"} err="failed to get container status \"a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4\": rpc error: code = NotFound desc = could not find container \"a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4\": container with ID starting with a16d0d39f9f479f864bc0517b76193d6891af009f9545e7cf351cda3ecbfbae4 not found: ID does not exist" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.974941 4966 scope.go:117] "RemoveContainer" 
containerID="ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.975101 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14"} err="failed to get container status \"ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14\": rpc error: code = NotFound desc = could not find container \"ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14\": container with ID starting with ebd1456fc5046bb9a9f95289911cc64527ba30458174108632770debfbfe4d14 not found: ID does not exist" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.975115 4966 scope.go:117] "RemoveContainer" containerID="2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.975264 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535"} err="failed to get container status \"2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535\": rpc error: code = NotFound desc = could not find container \"2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535\": container with ID starting with 2134a3af8d04600963d002d91338ab5a40344ad60ccc93888e65599e30ef9535 not found: ID does not exist" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.975404 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.978124 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.978150 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:07:19 crc kubenswrapper[4966]: I0127 16:07:19.987517 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.021076 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-run-httpd\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.021191 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-scripts\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.021314 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.021356 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-log-httpd\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " 
pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.021379 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.021395 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-config-data\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.021421 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5z6n\" (UniqueName: \"kubernetes.io/projected/667c0142-8c32-4658-ac1c-af787a4845e9-kube-api-access-x5z6n\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.045418 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.086417 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.123301 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.123641 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-log-httpd\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.123789 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.124086 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-config-data\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.124229 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5z6n\" (UniqueName: \"kubernetes.io/projected/667c0142-8c32-4658-ac1c-af787a4845e9-kube-api-access-x5z6n\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.124371 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-run-httpd\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.124245 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-log-httpd\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.124704 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-scripts\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.124848 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-run-httpd\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.127549 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.127997 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.131211 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-scripts\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.142885 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-config-data\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.153874 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5z6n\" (UniqueName: \"kubernetes.io/projected/667c0142-8c32-4658-ac1c-af787a4845e9-kube-api-access-x5z6n\") pod \"ceilometer-0\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.295927 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.555880 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d12fc36-1730-412d-97bd-92054dca1544" path="/var/lib/kubelet/pods/2d12fc36-1730-412d-97bd-92054dca1544/volumes" Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.672560 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-lfwmv"] Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.754036 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxq52" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" probeResult="failure" output=< Jan 27 16:07:20 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:07:20 crc kubenswrapper[4966]: > Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.788723 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-36ca-account-create-update-vctkf"] Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.837544 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-lfwmv" event={"ID":"bb334693-8f68-43f6-9d39-925d1dcd1e03","Type":"ContainerStarted","Data":"fe945b7bd77dae4723866df2a0194cc4d73a9bd571e18cc32de13e6798713bc2"} Jan 27 16:07:20 crc kubenswrapper[4966]: I0127 16:07:20.842170 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-36ca-account-create-update-vctkf" event={"ID":"1b6c82fc-6625-40e2-ade7-e4c47f978ac8","Type":"ContainerStarted","Data":"bf94655b5cd01dedce2dc49703ea217f0b1ccf7491639770ec8a105b7bb53995"} Jan 27 16:07:21 crc kubenswrapper[4966]: W0127 16:07:21.068473 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod667c0142_8c32_4658_ac1c_af787a4845e9.slice/crio-90427550b73e642365adecff26f608ab994125c20884d53be344e7ab293177b8 WatchSource:0}: Error finding container 90427550b73e642365adecff26f608ab994125c20884d53be344e7ab293177b8: Status 404 returned error can't find the container with id 90427550b73e642365adecff26f608ab994125c20884d53be344e7ab293177b8 Jan 27 16:07:21 crc kubenswrapper[4966]: I0127 16:07:21.077696 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:21 crc kubenswrapper[4966]: I0127 16:07:21.855363 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerStarted","Data":"90427550b73e642365adecff26f608ab994125c20884d53be344e7ab293177b8"} Jan 27 16:07:21 crc kubenswrapper[4966]: I0127 16:07:21.861361 4966 generic.go:334] "Generic (PLEG): container finished" podID="bb334693-8f68-43f6-9d39-925d1dcd1e03" containerID="f404ed0f7c03450d64be1fb867a171539600f572c52bf45ae48b5e2694033e1b" exitCode=0 Jan 27 16:07:21 crc kubenswrapper[4966]: I0127 16:07:21.861457 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-lfwmv" event={"ID":"bb334693-8f68-43f6-9d39-925d1dcd1e03","Type":"ContainerDied","Data":"f404ed0f7c03450d64be1fb867a171539600f572c52bf45ae48b5e2694033e1b"} Jan 27 16:07:21 crc kubenswrapper[4966]: I0127 16:07:21.863754 4966 generic.go:334] "Generic (PLEG): container finished" podID="1b6c82fc-6625-40e2-ade7-e4c47f978ac8" containerID="6259c5133188e54c1c26f1b0261906b9283ae4cf74b82bfcbcfe3f690a147dd4" exitCode=0 Jan 27 16:07:21 crc kubenswrapper[4966]: I0127 16:07:21.863811 4966 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-36ca-account-create-update-vctkf" event={"ID":"1b6c82fc-6625-40e2-ade7-e4c47f978ac8","Type":"ContainerDied","Data":"6259c5133188e54c1c26f1b0261906b9283ae4cf74b82bfcbcfe3f690a147dd4"} Jan 27 16:07:22 crc kubenswrapper[4966]: I0127 16:07:22.876924 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerStarted","Data":"4186fc3feb9edf02dd6773e0c75f3488b2bb01b041014268a6d5df251fae99b8"} Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.267579 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.477663 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.483538 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.631728 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cp7q\" (UniqueName: \"kubernetes.io/projected/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-kube-api-access-4cp7q\") pod \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\" (UID: \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\") " Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.631989 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb334693-8f68-43f6-9d39-925d1dcd1e03-operator-scripts\") pod \"bb334693-8f68-43f6-9d39-925d1dcd1e03\" (UID: \"bb334693-8f68-43f6-9d39-925d1dcd1e03\") " Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.632097 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m79v\" (UniqueName: \"kubernetes.io/projected/bb334693-8f68-43f6-9d39-925d1dcd1e03-kube-api-access-4m79v\") pod \"bb334693-8f68-43f6-9d39-925d1dcd1e03\" (UID: \"bb334693-8f68-43f6-9d39-925d1dcd1e03\") " Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.632267 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-operator-scripts\") pod \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\" (UID: \"1b6c82fc-6625-40e2-ade7-e4c47f978ac8\") " Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.632938 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb334693-8f68-43f6-9d39-925d1dcd1e03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb334693-8f68-43f6-9d39-925d1dcd1e03" (UID: "bb334693-8f68-43f6-9d39-925d1dcd1e03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.633932 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b6c82fc-6625-40e2-ade7-e4c47f978ac8" (UID: "1b6c82fc-6625-40e2-ade7-e4c47f978ac8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.640154 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-kube-api-access-4cp7q" (OuterVolumeSpecName: "kube-api-access-4cp7q") pod "1b6c82fc-6625-40e2-ade7-e4c47f978ac8" (UID: "1b6c82fc-6625-40e2-ade7-e4c47f978ac8"). InnerVolumeSpecName "kube-api-access-4cp7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.642226 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb334693-8f68-43f6-9d39-925d1dcd1e03-kube-api-access-4m79v" (OuterVolumeSpecName: "kube-api-access-4m79v") pod "bb334693-8f68-43f6-9d39-925d1dcd1e03" (UID: "bb334693-8f68-43f6-9d39-925d1dcd1e03"). InnerVolumeSpecName "kube-api-access-4m79v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.735257 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb334693-8f68-43f6-9d39-925d1dcd1e03-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.735320 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m79v\" (UniqueName: \"kubernetes.io/projected/bb334693-8f68-43f6-9d39-925d1dcd1e03-kube-api-access-4m79v\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.735342 4966 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.735382 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cp7q\" (UniqueName: \"kubernetes.io/projected/1b6c82fc-6625-40e2-ade7-e4c47f978ac8-kube-api-access-4cp7q\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.893961 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-lfwmv" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.893940 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-lfwmv" event={"ID":"bb334693-8f68-43f6-9d39-925d1dcd1e03","Type":"ContainerDied","Data":"fe945b7bd77dae4723866df2a0194cc4d73a9bd571e18cc32de13e6798713bc2"} Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.894604 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe945b7bd77dae4723866df2a0194cc4d73a9bd571e18cc32de13e6798713bc2" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.896569 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-36ca-account-create-update-vctkf" event={"ID":"1b6c82fc-6625-40e2-ade7-e4c47f978ac8","Type":"ContainerDied","Data":"bf94655b5cd01dedce2dc49703ea217f0b1ccf7491639770ec8a105b7bb53995"} Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.896615 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf94655b5cd01dedce2dc49703ea217f0b1ccf7491639770ec8a105b7bb53995" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.896613 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-36ca-account-create-update-vctkf" Jan 27 16:07:23 crc kubenswrapper[4966]: I0127 16:07:23.899508 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerStarted","Data":"3365e8457ad57ae865804abf5566c0c69c9762074e1baea212afec698a86c817"} Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.105811 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-l5fz8"] Jan 27 16:07:24 crc kubenswrapper[4966]: E0127 16:07:24.106331 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb334693-8f68-43f6-9d39-925d1dcd1e03" containerName="mariadb-database-create" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.106358 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb334693-8f68-43f6-9d39-925d1dcd1e03" containerName="mariadb-database-create" Jan 27 16:07:24 crc kubenswrapper[4966]: E0127 16:07:24.106396 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6c82fc-6625-40e2-ade7-e4c47f978ac8" containerName="mariadb-account-create-update" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.106403 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6c82fc-6625-40e2-ade7-e4c47f978ac8" containerName="mariadb-account-create-update" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.106616 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6c82fc-6625-40e2-ade7-e4c47f978ac8" containerName="mariadb-account-create-update" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.106642 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb334693-8f68-43f6-9d39-925d1dcd1e03" containerName="mariadb-database-create" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.107430 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.110551 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.110613 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.124266 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-l5fz8"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.247014 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-scripts\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.247084 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-config-data\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.247128 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.247320 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lxfp\" (UniqueName: \"kubernetes.io/projected/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-kube-api-access-4lxfp\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.349591 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-scripts\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.349655 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-config-data\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.349697 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.349850 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lxfp\" (UniqueName: 
\"kubernetes.io/projected/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-kube-api-access-4lxfp\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.354661 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-scripts\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.355068 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-config-data\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.355109 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.390590 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lxfp\" (UniqueName: \"kubernetes.io/projected/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-kube-api-access-4lxfp\") pod \"nova-cell0-cell-mapping-l5fz8\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.409379 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.411947 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.415300 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.446104 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.452982 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.510528 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.512338 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.540177 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.580401 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-config-data\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.580493 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-logs\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.580625 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.580701 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhscb\" (UniqueName: \"kubernetes.io/projected/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-kube-api-access-rhscb\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.605417 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.607128 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.607147 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.607217 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.628052 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.660789 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-rncxr"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.672274 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682297 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-logs\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682363 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-config-data\") pod \"nova-scheduler-0\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682443 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-config-data\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682475 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-logs\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682536 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682575 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtb4k\" (UniqueName: \"kubernetes.io/projected/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-kube-api-access-dtb4k\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682603 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682650 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhscb\" (UniqueName: \"kubernetes.io/projected/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-kube-api-access-rhscb\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682701 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-config-data\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682716 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.682753 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtdc\" (UniqueName: \"kubernetes.io/projected/1ded670a-f870-4df0-88b4-330b5f8e2f19-kube-api-access-bhtdc\") pod \"nova-scheduler-0\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.683299 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-logs\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.696189 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.701575 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-config-data\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.715754 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhscb\" (UniqueName: \"kubernetes.io/projected/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-kube-api-access-rhscb\") pod \"nova-api-0\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.737607 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.741410 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.744341 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.761331 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-rncxr"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.779341 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785390 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-config-data\") pod \"nova-scheduler-0\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785445 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785474 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785507 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785534 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-config-data\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785561 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785601 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-logs\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785642 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: 
\"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785666 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-config\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785702 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7w9k\" (UniqueName: \"kubernetes.io/projected/69d2d694-e151-492e-9cb4-8dc614ea1b44-kube-api-access-g7w9k\") pod \"nova-cell1-novncproxy-0\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785735 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtb4k\" (UniqueName: \"kubernetes.io/projected/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-kube-api-access-dtb4k\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785758 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785779 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785867 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785921 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhtdc\" (UniqueName: \"kubernetes.io/projected/1ded670a-f870-4df0-88b4-330b5f8e2f19-kube-api-access-bhtdc\") pod \"nova-scheduler-0\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.785998 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29g2q\" (UniqueName: \"kubernetes.io/projected/44197e72-4dfa-407d-83f4-99884b556b0d-kube-api-access-29g2q\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.789336 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-logs\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " 
pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.789828 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-config-data\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.790262 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-config-data\") pod \"nova-scheduler-0\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.792021 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.796629 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.821513 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhtdc\" (UniqueName: \"kubernetes.io/projected/1ded670a-f870-4df0-88b4-330b5f8e2f19-kube-api-access-bhtdc\") pod \"nova-scheduler-0\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.822813 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtb4k\" (UniqueName: \"kubernetes.io/projected/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-kube-api-access-dtb4k\") pod \"nova-metadata-0\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " pod="openstack/nova-metadata-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.894028 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.894119 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-config\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.894210 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7w9k\" (UniqueName: \"kubernetes.io/projected/69d2d694-e151-492e-9cb4-8dc614ea1b44-kube-api-access-g7w9k\") pod \"nova-cell1-novncproxy-0\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.894317 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.894689 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29g2q\" (UniqueName: \"kubernetes.io/projected/44197e72-4dfa-407d-83f4-99884b556b0d-kube-api-access-29g2q\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.894810 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.894838 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.894883 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.894973 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.901258 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.901993 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.907503 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-config\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.908592 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-nb\") pod 
\"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.914176 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.918714 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.950853 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.954631 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerStarted","Data":"4f050ce223151ea25462de0c1cb44011b0ba917caffd6d101e4ebd4b6941f2f5"} Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.992524 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:24 crc kubenswrapper[4966]: I0127 16:07:24.993714 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:24.997851 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7w9k\" (UniqueName: \"kubernetes.io/projected/69d2d694-e151-492e-9cb4-8dc614ea1b44-kube-api-access-g7w9k\") pod \"nova-cell1-novncproxy-0\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.006079 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29g2q\" (UniqueName: \"kubernetes.io/projected/44197e72-4dfa-407d-83f4-99884b556b0d-kube-api-access-29g2q\") pod \"dnsmasq-dns-568d7fd7cf-rncxr\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.009748 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.039909 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.089295 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.302976 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-fr6bx"] Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.304972 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.309664 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jknwm" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.309977 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.313658 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.317379 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.356008 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-fr6bx"] Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.411414 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mbzf\" (UniqueName: \"kubernetes.io/projected/0dfee338-4e72-4d66-aed2-e81ce752c4fc-kube-api-access-4mbzf\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.411616 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-combined-ca-bundle\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.411699 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-scripts\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.411754 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-config-data\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.511118 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-l5fz8"] Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.513698 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-combined-ca-bundle\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.513762 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-scripts\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.513807 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-config-data\") 
pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.513863 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mbzf\" (UniqueName: \"kubernetes.io/projected/0dfee338-4e72-4d66-aed2-e81ce752c4fc-kube-api-access-4mbzf\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.518516 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-scripts\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.519070 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-combined-ca-bundle\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.530286 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-config-data\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.553697 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2fjt"] Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.556007 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.575440 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2fjt"] Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.580718 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.592425 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.610643 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mbzf\" (UniqueName: \"kubernetes.io/projected/0dfee338-4e72-4d66-aed2-e81ce752c4fc-kube-api-access-4mbzf\") pod \"aodh-db-sync-fr6bx\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.621979 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-scripts\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.622048 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.622070 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-config-data\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.622096 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcb29\" (UniqueName: \"kubernetes.io/projected/4e490f34-cd48-4988-9a8e-ea51b08268fc-kube-api-access-pcb29\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.650839 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.724998 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-scripts\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.725082 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.725120 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-config-data\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.725160 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcb29\" (UniqueName: \"kubernetes.io/projected/4e490f34-cd48-4988-9a8e-ea51b08268fc-kube-api-access-pcb29\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.738215 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-config-data\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.741372 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-scripts\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.741654 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.751323 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcb29\" (UniqueName: \"kubernetes.io/projected/4e490f34-cd48-4988-9a8e-ea51b08268fc-kube-api-access-pcb29\") pod \"nova-cell1-conductor-db-sync-t2fjt\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.900082 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.952046 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:25 crc kubenswrapper[4966]: I0127 16:07:25.998649 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7","Type":"ContainerStarted","Data":"1680feed9dd0525f8a77717944f2929e8003db5182b6fb22029c82ede0fbfdc0"} Jan 27 16:07:26 crc kubenswrapper[4966]: I0127 16:07:26.006126 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-l5fz8" event={"ID":"f6b603ed-ab2c-4dc4-ae4c-0990f706529f","Type":"ContainerStarted","Data":"8d83f6ab5df65fc07c638469c5eec99b2375a4f1b610be430e19145a13eb5aa2"} Jan 27 16:07:26 crc kubenswrapper[4966]: I0127 16:07:26.105619 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:07:26 crc kubenswrapper[4966]: I0127 16:07:26.265321 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 16:07:26 crc kubenswrapper[4966]: I0127 16:07:26.462148 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:07:26 crc kubenswrapper[4966]: W0127 16:07:26.465141 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b7d4ffe_671a_4547_83aa_50f011b9ae7e.slice/crio-17a2dbcaca7e7288beeb12ebd4c55caaf4b29d46f1e7b608633d4b07108d0497 WatchSource:0}: Error finding container 17a2dbcaca7e7288beeb12ebd4c55caaf4b29d46f1e7b608633d4b07108d0497: Status 404 returned error can't find the container with id 17a2dbcaca7e7288beeb12ebd4c55caaf4b29d46f1e7b608633d4b07108d0497 Jan 27 16:07:26 crc kubenswrapper[4966]: I0127 16:07:26.479293 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-rncxr"] Jan 27 16:07:26 crc kubenswrapper[4966]: I0127 16:07:26.577563 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-fr6bx"] Jan 27 16:07:26 crc kubenswrapper[4966]: I0127 16:07:26.747780 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2fjt"] Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.049716 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerStarted","Data":"400653b14c9f3ebfc976f795542aad5d6681e97ef8806969d402dddbb8088f99"} Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.050035 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.055868 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-l5fz8" event={"ID":"f6b603ed-ab2c-4dc4-ae4c-0990f706529f","Type":"ContainerStarted","Data":"2ef2fede60daac4b85e698cc31af3aa07a817bee99e42dc9cdeb150190a6132f"} Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.060601 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" event={"ID":"44197e72-4dfa-407d-83f4-99884b556b0d","Type":"ContainerStarted","Data":"92e82760c702a6be2c598c382cdecee1226ea1254432ec29a519df3d30e0818c"} Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.062113 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"1ded670a-f870-4df0-88b4-330b5f8e2f19","Type":"ContainerStarted","Data":"42fbd078fef50835c841b731273c2260d64066fa6b33fd27c6ba5dad1d3c65ec"} Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.063262 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"69d2d694-e151-492e-9cb4-8dc614ea1b44","Type":"ContainerStarted","Data":"1426ebce8781a350f84522a1f391dc10e063f5a863febde5ca541cbd6d4e5bd5"} Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.064553 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2fjt" event={"ID":"4e490f34-cd48-4988-9a8e-ea51b08268fc","Type":"ContainerStarted","Data":"f200fb837ce7057fe2b5f909f5baef1b63d54afe5444388852ae98d97d27e0fd"} Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.077484 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.554063496 podStartE2EDuration="8.077464168s" podCreationTimestamp="2026-01-27 16:07:19 +0000 UTC" firstStartedPulling="2026-01-27 16:07:21.070625572 +0000 UTC m=+1507.373419060" lastFinishedPulling="2026-01-27 16:07:26.594026244 +0000 UTC m=+1512.896819732" observedRunningTime="2026-01-27 16:07:27.072022967 +0000 UTC m=+1513.374816465" watchObservedRunningTime="2026-01-27 16:07:27.077464168 +0000 UTC m=+1513.380257656" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.082385 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2b7d4ffe-671a-4547-83aa-50f011b9ae7e","Type":"ContainerStarted","Data":"17a2dbcaca7e7288beeb12ebd4c55caaf4b29d46f1e7b608633d4b07108d0497"} Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.088673 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-fr6bx" event={"ID":"0dfee338-4e72-4d66-aed2-e81ce752c4fc","Type":"ContainerStarted","Data":"6c5c17d041a5700f43ee2333177bdd871130b65a8d371532eeb436d3acbdfbe0"} Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.093469 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-l5fz8" podStartSLOduration=3.093450179 podStartE2EDuration="3.093450179s" podCreationTimestamp="2026-01-27 16:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:07:27.088516285 +0000 UTC m=+1513.391309773" watchObservedRunningTime="2026-01-27 16:07:27.093450179 +0000 UTC m=+1513.396243667" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.545011 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qklj5"] Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.548307 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.603068 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qklj5"] Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.625430 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4j2r\" (UniqueName: \"kubernetes.io/projected/7a12ee30-16c2-49ec-8473-7ed403256c25-kube-api-access-c4j2r\") pod \"community-operators-qklj5\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.625512 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-utilities\") pod \"community-operators-qklj5\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.625589 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-catalog-content\") pod \"community-operators-qklj5\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.729355 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4j2r\" (UniqueName: \"kubernetes.io/projected/7a12ee30-16c2-49ec-8473-7ed403256c25-kube-api-access-c4j2r\") pod \"community-operators-qklj5\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.729432 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-utilities\") pod \"community-operators-qklj5\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.729481 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-catalog-content\") pod \"community-operators-qklj5\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.730049 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-catalog-content\") pod \"community-operators-qklj5\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.730151 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-utilities\") pod \"community-operators-qklj5\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.770406 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c4j2r\" (UniqueName: \"kubernetes.io/projected/7a12ee30-16c2-49ec-8473-7ed403256c25-kube-api-access-c4j2r\") pod \"community-operators-qklj5\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:27 crc kubenswrapper[4966]: I0127 16:07:27.923796 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:28 crc kubenswrapper[4966]: I0127 16:07:28.145194 4966 generic.go:334] "Generic (PLEG): container finished" podID="44197e72-4dfa-407d-83f4-99884b556b0d" containerID="b28ad71f7e61400c157f093c0a0160e729199ab71f7d4ffec19a3a6f8edbb7e1" exitCode=0 Jan 27 16:07:28 crc kubenswrapper[4966]: I0127 16:07:28.145434 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" event={"ID":"44197e72-4dfa-407d-83f4-99884b556b0d","Type":"ContainerDied","Data":"b28ad71f7e61400c157f093c0a0160e729199ab71f7d4ffec19a3a6f8edbb7e1"} Jan 27 16:07:28 crc kubenswrapper[4966]: I0127 16:07:28.183420 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2fjt" event={"ID":"4e490f34-cd48-4988-9a8e-ea51b08268fc","Type":"ContainerStarted","Data":"72d3d1c8f7e26e88a0d2c3f86a38135af1c493d3a994402eccb79d631ca88c54"} Jan 27 16:07:28 crc kubenswrapper[4966]: I0127 16:07:28.232545 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-t2fjt" podStartSLOduration=3.232519733 podStartE2EDuration="3.232519733s" podCreationTimestamp="2026-01-27 16:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:07:28.216210241 +0000 UTC m=+1514.519003729" watchObservedRunningTime="2026-01-27 16:07:28.232519733 +0000 UTC m=+1514.535313221" Jan 27 16:07:28 crc kubenswrapper[4966]: I0127 16:07:28.634312 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 16:07:28 crc kubenswrapper[4966]: I0127 16:07:28.665665 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:07:28 crc kubenswrapper[4966]: I0127 16:07:28.762841 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qklj5"] Jan 27 16:07:28 crc kubenswrapper[4966]: W0127 16:07:28.781647 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a12ee30_16c2_49ec_8473_7ed403256c25.slice/crio-190637d3a66e3a32add1b07b83ca8c1db22b54979220e021db1ab3d08b4373f1 WatchSource:0}: Error finding container 190637d3a66e3a32add1b07b83ca8c1db22b54979220e021db1ab3d08b4373f1: Status 404 returned error can't find the container with id 190637d3a66e3a32add1b07b83ca8c1db22b54979220e021db1ab3d08b4373f1 Jan 27 16:07:29 crc kubenswrapper[4966]: I0127 16:07:29.201556 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qklj5" event={"ID":"7a12ee30-16c2-49ec-8473-7ed403256c25","Type":"ContainerStarted","Data":"43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7"} Jan 27 16:07:29 crc kubenswrapper[4966]: I0127 16:07:29.201806 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qklj5" 
event={"ID":"7a12ee30-16c2-49ec-8473-7ed403256c25","Type":"ContainerStarted","Data":"190637d3a66e3a32add1b07b83ca8c1db22b54979220e021db1ab3d08b4373f1"} Jan 27 16:07:29 crc kubenswrapper[4966]: I0127 16:07:29.204780 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" event={"ID":"44197e72-4dfa-407d-83f4-99884b556b0d","Type":"ContainerStarted","Data":"7f4e29a18867cb7b1e2eb676620043bf22c377197ca7d06bdc704a49423fee69"} Jan 27 16:07:29 crc kubenswrapper[4966]: I0127 16:07:29.248022 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" podStartSLOduration=5.247998598 podStartE2EDuration="5.247998598s" podCreationTimestamp="2026-01-27 16:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:07:29.242927939 +0000 UTC m=+1515.545721447" watchObservedRunningTime="2026-01-27 16:07:29.247998598 +0000 UTC m=+1515.550792106" Jan 27 16:07:30 crc kubenswrapper[4966]: I0127 16:07:30.044375 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:30 crc kubenswrapper[4966]: I0127 16:07:30.226536 4966 generic.go:334] "Generic (PLEG): container finished" podID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerID="43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7" exitCode=0 Jan 27 16:07:30 crc kubenswrapper[4966]: I0127 16:07:30.227784 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qklj5" event={"ID":"7a12ee30-16c2-49ec-8473-7ed403256c25","Type":"ContainerDied","Data":"43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7"} Jan 27 16:07:30 crc kubenswrapper[4966]: I0127 16:07:30.731406 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxq52" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" probeResult="failure" output=< Jan 27 16:07:30 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:07:30 crc kubenswrapper[4966]: > Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.045085 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.121665 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-mlr65"] Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.121971 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" podUID="30e072ee-5227-4c65-8ccd-bc5e01bc5394" containerName="dnsmasq-dns" containerID="cri-o://724a74d414cadbe46ba20c2c8bab4a6aca86a3b58fa76fd899f283e5f952e86d" gracePeriod=10 Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.323385 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1ded670a-f870-4df0-88b4-330b5f8e2f19","Type":"ContainerStarted","Data":"4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618"} Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.326617 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"69d2d694-e151-492e-9cb4-8dc614ea1b44","Type":"ContainerStarted","Data":"a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a"} Jan 27 16:07:35 crc 
kubenswrapper[4966]: I0127 16:07:35.327760 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="69d2d694-e151-492e-9cb4-8dc614ea1b44" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a" gracePeriod=30 Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.342207 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2b7d4ffe-671a-4547-83aa-50f011b9ae7e","Type":"ContainerStarted","Data":"d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c"} Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.342276 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2b7d4ffe-671a-4547-83aa-50f011b9ae7e","Type":"ContainerStarted","Data":"92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186"} Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.342237 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerName="nova-metadata-log" containerID="cri-o://92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186" gracePeriod=30 Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.342346 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerName="nova-metadata-metadata" containerID="cri-o://d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c" gracePeriod=30 Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.351951 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-fr6bx" event={"ID":"0dfee338-4e72-4d66-aed2-e81ce752c4fc","Type":"ContainerStarted","Data":"4e486ef5edbe46aa593b0e15003256cff3449f7f246f575816bfa12f1329b791"} Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.360125 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.281936495 podStartE2EDuration="11.360075577s" podCreationTimestamp="2026-01-27 16:07:24 +0000 UTC" firstStartedPulling="2026-01-27 16:07:26.147149276 +0000 UTC m=+1512.449942764" lastFinishedPulling="2026-01-27 16:07:34.225288358 +0000 UTC m=+1520.528081846" observedRunningTime="2026-01-27 16:07:35.342159035 +0000 UTC m=+1521.644952533" watchObservedRunningTime="2026-01-27 16:07:35.360075577 +0000 UTC m=+1521.662869085" Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.368447 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7","Type":"ContainerStarted","Data":"6fcca92b102978eb79bc98f6f2485e326bf52f743b02157a9ef7e2f8e9244965"} Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.368495 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7","Type":"ContainerStarted","Data":"f7d0d0987b5e10741f73c9a1a8c5efbbeff523de0da56f76e942761f534ee522"} Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.380136 4966 generic.go:334] "Generic (PLEG): container finished" podID="30e072ee-5227-4c65-8ccd-bc5e01bc5394" containerID="724a74d414cadbe46ba20c2c8bab4a6aca86a3b58fa76fd899f283e5f952e86d" exitCode=0 Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.380208 4966 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" event={"ID":"30e072ee-5227-4c65-8ccd-bc5e01bc5394","Type":"ContainerDied","Data":"724a74d414cadbe46ba20c2c8bab4a6aca86a3b58fa76fd899f283e5f952e86d"} Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.387366 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qklj5" event={"ID":"7a12ee30-16c2-49ec-8473-7ed403256c25","Type":"ContainerStarted","Data":"500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b"} Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.395684 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.710745905 podStartE2EDuration="11.395641923s" podCreationTimestamp="2026-01-27 16:07:24 +0000 UTC" firstStartedPulling="2026-01-27 16:07:26.53913347 +0000 UTC m=+1512.841926958" lastFinishedPulling="2026-01-27 16:07:34.224029488 +0000 UTC m=+1520.526822976" observedRunningTime="2026-01-27 16:07:35.364284029 +0000 UTC m=+1521.667077517" watchObservedRunningTime="2026-01-27 16:07:35.395641923 +0000 UTC m=+1521.698435401" Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.426292 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.564221888 podStartE2EDuration="11.426263815s" podCreationTimestamp="2026-01-27 16:07:24 +0000 UTC" firstStartedPulling="2026-01-27 16:07:26.361629709 +0000 UTC m=+1512.664423197" lastFinishedPulling="2026-01-27 16:07:34.223671636 +0000 UTC m=+1520.526465124" observedRunningTime="2026-01-27 16:07:35.390698769 +0000 UTC m=+1521.693492277" watchObservedRunningTime="2026-01-27 16:07:35.426263815 +0000 UTC m=+1521.729057303" Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.427881 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.104358793 podStartE2EDuration="11.427870125s" podCreationTimestamp="2026-01-27 16:07:24 +0000 UTC" firstStartedPulling="2026-01-27 16:07:25.900164825 +0000 UTC m=+1512.202958313" lastFinishedPulling="2026-01-27 16:07:34.223676157 +0000 UTC m=+1520.526469645" observedRunningTime="2026-01-27 16:07:35.410990045 +0000 UTC m=+1521.713783533" watchObservedRunningTime="2026-01-27 16:07:35.427870125 +0000 UTC m=+1521.730663603" Jan 27 16:07:35 crc kubenswrapper[4966]: I0127 16:07:35.459405 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-fr6bx" podStartSLOduration=2.70957475 podStartE2EDuration="10.459379584s" podCreationTimestamp="2026-01-27 16:07:25 +0000 UTC" firstStartedPulling="2026-01-27 16:07:26.597457342 +0000 UTC m=+1512.900250830" lastFinishedPulling="2026-01-27 16:07:34.347262176 +0000 UTC m=+1520.650055664" observedRunningTime="2026-01-27 16:07:35.428115532 +0000 UTC m=+1521.730909040" watchObservedRunningTime="2026-01-27 16:07:35.459379584 +0000 UTC m=+1521.762173072" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.020381 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.186334 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-nb\") pod \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.186390 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t52s6\" (UniqueName: \"kubernetes.io/projected/30e072ee-5227-4c65-8ccd-bc5e01bc5394-kube-api-access-t52s6\") pod \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.186452 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-swift-storage-0\") pod \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.186548 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-config\") pod \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.186591 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-sb\") pod \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.187216 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-svc\") pod \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\" (UID: \"30e072ee-5227-4c65-8ccd-bc5e01bc5394\") " Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.193325 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e072ee-5227-4c65-8ccd-bc5e01bc5394-kube-api-access-t52s6" (OuterVolumeSpecName: "kube-api-access-t52s6") pod "30e072ee-5227-4c65-8ccd-bc5e01bc5394" (UID: "30e072ee-5227-4c65-8ccd-bc5e01bc5394"). InnerVolumeSpecName "kube-api-access-t52s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.250283 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-config" (OuterVolumeSpecName: "config") pod "30e072ee-5227-4c65-8ccd-bc5e01bc5394" (UID: "30e072ee-5227-4c65-8ccd-bc5e01bc5394"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.293512 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t52s6\" (UniqueName: \"kubernetes.io/projected/30e072ee-5227-4c65-8ccd-bc5e01bc5394-kube-api-access-t52s6\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.293559 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.327686 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "30e072ee-5227-4c65-8ccd-bc5e01bc5394" (UID: "30e072ee-5227-4c65-8ccd-bc5e01bc5394"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.355046 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30e072ee-5227-4c65-8ccd-bc5e01bc5394" (UID: "30e072ee-5227-4c65-8ccd-bc5e01bc5394"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.363807 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30e072ee-5227-4c65-8ccd-bc5e01bc5394" (UID: "30e072ee-5227-4c65-8ccd-bc5e01bc5394"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.372736 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30e072ee-5227-4c65-8ccd-bc5e01bc5394" (UID: "30e072ee-5227-4c65-8ccd-bc5e01bc5394"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.396026 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.396060 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.396071 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.396081 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30e072ee-5227-4c65-8ccd-bc5e01bc5394-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.401511 4966 generic.go:334] "Generic (PLEG): container finished" podID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerID="92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186" exitCode=143 Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.401571 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2b7d4ffe-671a-4547-83aa-50f011b9ae7e","Type":"ContainerDied","Data":"92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186"} Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.404660 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.405265 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-mlr65" event={"ID":"30e072ee-5227-4c65-8ccd-bc5e01bc5394","Type":"ContainerDied","Data":"81422c2d923cc67800aabb2ce129846fa2f578ee9af9adc128a4ba07c530b794"} Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.405310 4966 scope.go:117] "RemoveContainer" containerID="724a74d414cadbe46ba20c2c8bab4a6aca86a3b58fa76fd899f283e5f952e86d" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.485436 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-mlr65"] Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.499721 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-mlr65"] Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.502570 4966 scope.go:117] "RemoveContainer" containerID="3e0b475de56f30e878607d1fbbe61a9689adf74fb2357f7ea106db898cad61da" Jan 27 16:07:36 crc kubenswrapper[4966]: I0127 16:07:36.539173 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30e072ee-5227-4c65-8ccd-bc5e01bc5394" path="/var/lib/kubelet/pods/30e072ee-5227-4c65-8ccd-bc5e01bc5394/volumes" Jan 27 16:07:37 crc kubenswrapper[4966]: I0127 16:07:37.418669 4966 generic.go:334] "Generic (PLEG): container finished" podID="f6b603ed-ab2c-4dc4-ae4c-0990f706529f" containerID="2ef2fede60daac4b85e698cc31af3aa07a817bee99e42dc9cdeb150190a6132f" exitCode=0 Jan 27 16:07:37 crc kubenswrapper[4966]: I0127 16:07:37.418732 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-l5fz8" event={"ID":"f6b603ed-ab2c-4dc4-ae4c-0990f706529f","Type":"ContainerDied","Data":"2ef2fede60daac4b85e698cc31af3aa07a817bee99e42dc9cdeb150190a6132f"} Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.100428 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qvqgn"] Jan 27 16:07:39 crc kubenswrapper[4966]: E0127 16:07:39.102306 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e072ee-5227-4c65-8ccd-bc5e01bc5394" containerName="init" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.102348 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e072ee-5227-4c65-8ccd-bc5e01bc5394" containerName="init" Jan 27 16:07:39 crc kubenswrapper[4966]: E0127 16:07:39.102389 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e072ee-5227-4c65-8ccd-bc5e01bc5394" containerName="dnsmasq-dns" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.102422 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e072ee-5227-4c65-8ccd-bc5e01bc5394" containerName="dnsmasq-dns" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.102986 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e072ee-5227-4c65-8ccd-bc5e01bc5394" containerName="dnsmasq-dns" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.109282 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.113154 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvqgn"] Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.269299 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mxf\" (UniqueName: \"kubernetes.io/projected/79463280-8972-4b4c-bfdb-c4909a812a02-kube-api-access-v6mxf\") pod \"certified-operators-qvqgn\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.269551 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-catalog-content\") pod \"certified-operators-qvqgn\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.269655 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-utilities\") pod \"certified-operators-qvqgn\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.371783 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6mxf\" (UniqueName: \"kubernetes.io/projected/79463280-8972-4b4c-bfdb-c4909a812a02-kube-api-access-v6mxf\") pod \"certified-operators-qvqgn\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.372011 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-catalog-content\") pod \"certified-operators-qvqgn\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.372118 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-utilities\") pod \"certified-operators-qvqgn\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.372734 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-catalog-content\") pod \"certified-operators-qvqgn\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.372740 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-utilities\") pod \"certified-operators-qvqgn\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.391677 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v6mxf\" (UniqueName: \"kubernetes.io/projected/79463280-8972-4b4c-bfdb-c4909a812a02-kube-api-access-v6mxf\") pod \"certified-operators-qvqgn\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.430752 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.894977 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-l5fz8" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.951688 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.987339 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-combined-ca-bundle\") pod \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.987475 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lxfp\" (UniqueName: \"kubernetes.io/projected/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-kube-api-access-4lxfp\") pod \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.987587 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-config-data\") pod \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.987687 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-scripts\") pod \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\" (UID: \"f6b603ed-ab2c-4dc4-ae4c-0990f706529f\") " Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.993889 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-kube-api-access-4lxfp" (OuterVolumeSpecName: "kube-api-access-4lxfp") pod "f6b603ed-ab2c-4dc4-ae4c-0990f706529f" (UID: "f6b603ed-ab2c-4dc4-ae4c-0990f706529f"). InnerVolumeSpecName "kube-api-access-4lxfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:39 crc kubenswrapper[4966]: I0127 16:07:39.993923 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-scripts" (OuterVolumeSpecName: "scripts") pod "f6b603ed-ab2c-4dc4-ae4c-0990f706529f" (UID: "f6b603ed-ab2c-4dc4-ae4c-0990f706529f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.011411 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.011507 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.021398 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-config-data" (OuterVolumeSpecName: "config-data") pod "f6b603ed-ab2c-4dc4-ae4c-0990f706529f" (UID: "f6b603ed-ab2c-4dc4-ae4c-0990f706529f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.022773 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6b603ed-ab2c-4dc4-ae4c-0990f706529f" (UID: "f6b603ed-ab2c-4dc4-ae4c-0990f706529f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.066274 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvqgn"] Jan 27 16:07:40 crc kubenswrapper[4966]: W0127 16:07:40.068841 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79463280_8972_4b4c_bfdb_c4909a812a02.slice/crio-0480985c50c7206385c832be6a06bb272da4a41b229dd5af5f4a7f2258377800 WatchSource:0}: Error finding container 0480985c50c7206385c832be6a06bb272da4a41b229dd5af5f4a7f2258377800: Status 404 returned error can't find the container with id 0480985c50c7206385c832be6a06bb272da4a41b229dd5af5f4a7f2258377800 Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.090374 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.090628 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.090662 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lxfp\" (UniqueName: \"kubernetes.io/projected/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-kube-api-access-4lxfp\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.090676 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.090687 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6b603ed-ab2c-4dc4-ae4c-0990f706529f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.119816 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body=
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.120426 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.120475 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v"
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.121736 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.121801 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" gracePeriod=600
Jan 27 16:07:40 crc kubenswrapper[4966]: E0127 16:07:40.349477 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.458262 4966 generic.go:334] "Generic (PLEG): container finished" podID="79463280-8972-4b4c-bfdb-c4909a812a02" containerID="323ad2851775c4eed890e574191165ed14743a1862fa7121794028db87ba1578" exitCode=0
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.458320 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvqgn" event={"ID":"79463280-8972-4b4c-bfdb-c4909a812a02","Type":"ContainerDied","Data":"323ad2851775c4eed890e574191165ed14743a1862fa7121794028db87ba1578"}
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.458392 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvqgn" event={"ID":"79463280-8972-4b4c-bfdb-c4909a812a02","Type":"ContainerStarted","Data":"0480985c50c7206385c832be6a06bb272da4a41b229dd5af5f4a7f2258377800"}
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.463509 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-l5fz8" event={"ID":"f6b603ed-ab2c-4dc4-ae4c-0990f706529f","Type":"ContainerDied","Data":"8d83f6ab5df65fc07c638469c5eec99b2375a4f1b610be430e19145a13eb5aa2"}
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.463550 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d83f6ab5df65fc07c638469c5eec99b2375a4f1b610be430e19145a13eb5aa2"
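
The records above show one full crash-loop cycle for machine-config-daemon: the liveness probe gets connection refused, the container is killed with its grace period, and the restart is then deferred with "back-off 5m0s restarting failed container". The sketch below is an illustrative model of that capped exponential restart back-off, not kubelet source: the /health URL and the 5m cap are taken from the log, while the 10s initial delay, the doubling, and the helper names (alive, maxBackoff) are assumptions based on kubelet's documented CrashLoopBackOff behavior.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// alive replays the liveness check from the log: a GET against the
// machine-config-daemon health endpoint with a short timeout.
func alive(url string) bool {
	c := &http.Client{Timeout: 1 * time.Second}
	resp, err := c.Get(url)
	if err != nil {
		return false // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	resp.Body.Close()
	return resp.StatusCode < 400
}

func main() {
	const initial, maxBackoff = 10 * time.Second, 5 * time.Minute
	backoff := initial
	for {
		if alive("http://127.0.0.1:8798/health") {
			backoff = initial            // a healthy container resets the window
			time.Sleep(10 * time.Second) // wait for the next periodic probe
			continue
		}
		fmt.Printf("back-off %s restarting failed container\n", backoff)
		time.Sleep(backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff // the 5m0s cap visible in the log
		}
	}
}
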
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.463619 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-l5fz8"
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.473228 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" exitCode=0
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.473280 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"}
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.473322 4966 scope.go:117] "RemoveContainer" containerID="9cad2005eaacff8196cf8bd744c3709abce6b9766c6adb25e972b7211933a53f"
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.474215 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"
Jan 27 16:07:40 crc kubenswrapper[4966]: E0127 16:07:40.474617 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:07:40 crc kubenswrapper[4966]: I0127 16:07:40.723911 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxq52" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" probeResult="failure" output=<
Jan 27 16:07:40 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s
Jan 27 16:07:40 crc kubenswrapper[4966]: >
Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.203968 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.204202 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerName="nova-api-log" containerID="cri-o://f7d0d0987b5e10741f73c9a1a8c5efbbeff523de0da56f76e942761f534ee522" gracePeriod=30
Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.204661 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerName="nova-api-api" containerID="cri-o://6fcca92b102978eb79bc98f6f2485e326bf52f743b02157a9ef7e2f8e9244965" gracePeriod=30
Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.217848 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.218056 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1ded670a-f870-4df0-88b4-330b5f8e2f19" containerName="nova-scheduler-scheduler" containerID="cri-o://4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618" gracePeriod=30
Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.495660 4966 generic.go:334] "Generic (PLEG): container finished" podID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerID="500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b" exitCode=0
Jan 27 16:07:41 crc
kubenswrapper[4966]: I0127 16:07:41.495762 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qklj5" event={"ID":"7a12ee30-16c2-49ec-8473-7ed403256c25","Type":"ContainerDied","Data":"500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b"} Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.503080 4966 generic.go:334] "Generic (PLEG): container finished" podID="0dfee338-4e72-4d66-aed2-e81ce752c4fc" containerID="4e486ef5edbe46aa593b0e15003256cff3449f7f246f575816bfa12f1329b791" exitCode=0 Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.503206 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-fr6bx" event={"ID":"0dfee338-4e72-4d66-aed2-e81ce752c4fc","Type":"ContainerDied","Data":"4e486ef5edbe46aa593b0e15003256cff3449f7f246f575816bfa12f1329b791"} Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.514448 4966 generic.go:334] "Generic (PLEG): container finished" podID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerID="6fcca92b102978eb79bc98f6f2485e326bf52f743b02157a9ef7e2f8e9244965" exitCode=0 Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.514481 4966 generic.go:334] "Generic (PLEG): container finished" podID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerID="f7d0d0987b5e10741f73c9a1a8c5efbbeff523de0da56f76e942761f534ee522" exitCode=143 Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.514504 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7","Type":"ContainerDied","Data":"6fcca92b102978eb79bc98f6f2485e326bf52f743b02157a9ef7e2f8e9244965"} Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.514621 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7","Type":"ContainerDied","Data":"f7d0d0987b5e10741f73c9a1a8c5efbbeff523de0da56f76e942761f534ee522"} Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.742636 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.839617 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-combined-ca-bundle\") pod \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.839694 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhscb\" (UniqueName: \"kubernetes.io/projected/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-kube-api-access-rhscb\") pod \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.839720 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-logs\") pod \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.839790 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-config-data\") pod \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\" (UID: \"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7\") " Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.840107 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-logs" (OuterVolumeSpecName: "logs") pod "0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" (UID: "0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.840547 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.846278 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-kube-api-access-rhscb" (OuterVolumeSpecName: "kube-api-access-rhscb") pod "0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" (UID: "0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7"). InnerVolumeSpecName "kube-api-access-rhscb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.871177 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" (UID: "0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.878160 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-config-data" (OuterVolumeSpecName: "config-data") pod "0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" (UID: "0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.942708 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.942744 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhscb\" (UniqueName: \"kubernetes.io/projected/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-kube-api-access-rhscb\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:41 crc kubenswrapper[4966]: I0127 16:07:41.942756 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.243794 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.351119 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-combined-ca-bundle\") pod \"1ded670a-f870-4df0-88b4-330b5f8e2f19\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.351377 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhtdc\" (UniqueName: \"kubernetes.io/projected/1ded670a-f870-4df0-88b4-330b5f8e2f19-kube-api-access-bhtdc\") pod \"1ded670a-f870-4df0-88b4-330b5f8e2f19\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.351457 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-config-data\") pod \"1ded670a-f870-4df0-88b4-330b5f8e2f19\" (UID: \"1ded670a-f870-4df0-88b4-330b5f8e2f19\") " Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.357001 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ded670a-f870-4df0-88b4-330b5f8e2f19-kube-api-access-bhtdc" (OuterVolumeSpecName: "kube-api-access-bhtdc") pod "1ded670a-f870-4df0-88b4-330b5f8e2f19" (UID: "1ded670a-f870-4df0-88b4-330b5f8e2f19"). InnerVolumeSpecName "kube-api-access-bhtdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.385779 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-config-data" (OuterVolumeSpecName: "config-data") pod "1ded670a-f870-4df0-88b4-330b5f8e2f19" (UID: "1ded670a-f870-4df0-88b4-330b5f8e2f19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.385851 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ded670a-f870-4df0-88b4-330b5f8e2f19" (UID: "1ded670a-f870-4df0-88b4-330b5f8e2f19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.454239 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhtdc\" (UniqueName: \"kubernetes.io/projected/1ded670a-f870-4df0-88b4-330b5f8e2f19-kube-api-access-bhtdc\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.454275 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.454286 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ded670a-f870-4df0-88b4-330b5f8e2f19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.553017 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7","Type":"ContainerDied","Data":"1680feed9dd0525f8a77717944f2929e8003db5182b6fb22029c82ede0fbfdc0"} Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.553059 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.553096 4966 scope.go:117] "RemoveContainer" containerID="6fcca92b102978eb79bc98f6f2485e326bf52f743b02157a9ef7e2f8e9244965" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.557679 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qklj5" event={"ID":"7a12ee30-16c2-49ec-8473-7ed403256c25","Type":"ContainerStarted","Data":"2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72"} Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.563140 4966 generic.go:334] "Generic (PLEG): container finished" podID="1ded670a-f870-4df0-88b4-330b5f8e2f19" containerID="4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618" exitCode=0 Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.563302 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.564701 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1ded670a-f870-4df0-88b4-330b5f8e2f19","Type":"ContainerDied","Data":"4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618"}
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.564732 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1ded670a-f870-4df0-88b4-330b5f8e2f19","Type":"ContainerDied","Data":"42fbd078fef50835c841b731273c2260d64066fa6b33fd27c6ba5dad1d3c65ec"}
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.566635 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvqgn" event={"ID":"79463280-8972-4b4c-bfdb-c4909a812a02","Type":"ContainerStarted","Data":"61250b183962098966ce3e70459f1c775fe3ed9157020a3bfb2781ae5a007200"}
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.585481 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qklj5" podStartSLOduration=4.073268541 podStartE2EDuration="15.585463001s" podCreationTimestamp="2026-01-27 16:07:27 +0000 UTC" firstStartedPulling="2026-01-27 16:07:30.486034098 +0000 UTC m=+1516.788827586" lastFinishedPulling="2026-01-27 16:07:41.998228558 +0000 UTC m=+1528.301022046" observedRunningTime="2026-01-27 16:07:42.581464665 +0000 UTC m=+1528.884258173" watchObservedRunningTime="2026-01-27 16:07:42.585463001 +0000 UTC m=+1528.888256479"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.632122 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.644575 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.647238 4966 scope.go:117] "RemoveContainer" containerID="f7d0d0987b5e10741f73c9a1a8c5efbbeff523de0da56f76e942761f534ee522"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.662156 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.683047 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.697645 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 16:07:42 crc kubenswrapper[4966]: E0127 16:07:42.698282 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b603ed-ab2c-4dc4-ae4c-0990f706529f" containerName="nova-manage"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.698298 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b603ed-ab2c-4dc4-ae4c-0990f706529f" containerName="nova-manage"
Jan 27 16:07:42 crc kubenswrapper[4966]: E0127 16:07:42.698310 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerName="nova-api-api"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.698316 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerName="nova-api-api"
Jan 27 16:07:42 crc kubenswrapper[4966]: E0127 16:07:42.698331 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ded670a-f870-4df0-88b4-330b5f8e2f19" containerName="nova-scheduler-scheduler"
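
The startup-latency record above is self-checking: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (16:07:42.585463001 - 16:07:27 = 15.585463001s), and podStartSLOduration is that same span minus the image-pull window (15.585463001s - (16:07:41.998228558 - 16:07:30.486034098) = 4.073268541s), i.e. the SLO figure excludes time spent pulling the image. A short Go check of that arithmetic, using the timestamps exactly as logged; they are in Go's default time.Time string format:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's default Time.String() layout

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-27 16:07:27 +0000 UTC")
	firstPull := mustParse("2026-01-27 16:07:30.486034098 +0000 UTC")
	lastPull := mustParse("2026-01-27 16:07:41.998228558 +0000 UTC")
	running := mustParse("2026-01-27 16:07:42.585463001 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)     // 15.585463001s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 11.51219446s spent pulling the image
	fmt.Println(e2e, e2e-pull)      // prints 15.585463001s 4.073268541s (podStartSLOduration)
}
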
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.698339 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ded670a-f870-4df0-88b4-330b5f8e2f19" containerName="nova-scheduler-scheduler"
Jan 27 16:07:42 crc kubenswrapper[4966]: E0127 16:07:42.698404 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerName="nova-api-log"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.698410 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerName="nova-api-log"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.698716 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerName="nova-api-log"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.698730 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" containerName="nova-api-api"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.698741 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ded670a-f870-4df0-88b4-330b5f8e2f19" containerName="nova-scheduler-scheduler"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.698754 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b603ed-ab2c-4dc4-ae4c-0990f706529f" containerName="nova-manage"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.702378 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.705567 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.712959 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.729044 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.731007 4966 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.733578 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.742293 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.751030 4966 scope.go:117] "RemoveContainer" containerID="4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.766223 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.766368 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2slcf\" (UniqueName: \"kubernetes.io/projected/4dbe16c5-6e68-4e28-9c01-e9558046f377-kube-api-access-2slcf\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.766409 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dbe16c5-6e68-4e28-9c01-e9558046f377-logs\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.766589 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-config-data\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.784439 4966 scope.go:117] "RemoveContainer" containerID="4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618" Jan 27 16:07:42 crc kubenswrapper[4966]: E0127 16:07:42.784974 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618\": container with ID starting with 4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618 not found: ID does not exist" containerID="4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.785016 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618"} err="failed to get container status \"4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618\": rpc error: code = NotFound desc = could not find container \"4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618\": container with ID starting with 4b4dd59cb715918353aa8a5d4f0cf12e821ebdbc95b934800581e4b05c220618 not found: ID does not exist" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.869372 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-config-data\") pod \"nova-api-0\" (UID: 
\"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.869559 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864qs\" (UniqueName: \"kubernetes.io/projected/ec82113e-c41d-4ee5-906b-3aa78d343e46-kube-api-access-864qs\") pod \"nova-scheduler-0\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.869615 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.869772 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2slcf\" (UniqueName: \"kubernetes.io/projected/4dbe16c5-6e68-4e28-9c01-e9558046f377-kube-api-access-2slcf\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.869829 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.869871 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dbe16c5-6e68-4e28-9c01-e9558046f377-logs\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.870004 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-config-data\") pod \"nova-scheduler-0\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.870614 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dbe16c5-6e68-4e28-9c01-e9558046f377-logs\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.882579 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-config-data\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.891834 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.891883 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2slcf\" (UniqueName: 
\"kubernetes.io/projected/4dbe16c5-6e68-4e28-9c01-e9558046f377-kube-api-access-2slcf\") pod \"nova-api-0\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " pod="openstack/nova-api-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.972404 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.972507 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-config-data\") pod \"nova-scheduler-0\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.972710 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-864qs\" (UniqueName: \"kubernetes.io/projected/ec82113e-c41d-4ee5-906b-3aa78d343e46-kube-api-access-864qs\") pod \"nova-scheduler-0\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.976501 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-config-data\") pod \"nova-scheduler-0\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.977829 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:42 crc kubenswrapper[4966]: I0127 16:07:42.989405 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-864qs\" (UniqueName: \"kubernetes.io/projected/ec82113e-c41d-4ee5-906b-3aa78d343e46-kube-api-access-864qs\") pod \"nova-scheduler-0\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " pod="openstack/nova-scheduler-0" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.032497 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.074830 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.132840 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.279854 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-config-data\") pod \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.280589 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-scripts\") pod \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.280629 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mbzf\" (UniqueName: \"kubernetes.io/projected/0dfee338-4e72-4d66-aed2-e81ce752c4fc-kube-api-access-4mbzf\") pod \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.280682 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-combined-ca-bundle\") pod \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\" (UID: \"0dfee338-4e72-4d66-aed2-e81ce752c4fc\") " Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.286327 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfee338-4e72-4d66-aed2-e81ce752c4fc-kube-api-access-4mbzf" (OuterVolumeSpecName: "kube-api-access-4mbzf") pod "0dfee338-4e72-4d66-aed2-e81ce752c4fc" (UID: "0dfee338-4e72-4d66-aed2-e81ce752c4fc"). InnerVolumeSpecName "kube-api-access-4mbzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.288137 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-scripts" (OuterVolumeSpecName: "scripts") pod "0dfee338-4e72-4d66-aed2-e81ce752c4fc" (UID: "0dfee338-4e72-4d66-aed2-e81ce752c4fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.315664 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dfee338-4e72-4d66-aed2-e81ce752c4fc" (UID: "0dfee338-4e72-4d66-aed2-e81ce752c4fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.325081 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-config-data" (OuterVolumeSpecName: "config-data") pod "0dfee338-4e72-4d66-aed2-e81ce752c4fc" (UID: "0dfee338-4e72-4d66-aed2-e81ce752c4fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.383356 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.383390 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mbzf\" (UniqueName: \"kubernetes.io/projected/0dfee338-4e72-4d66-aed2-e81ce752c4fc-kube-api-access-4mbzf\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.383401 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.383410 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfee338-4e72-4d66-aed2-e81ce752c4fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.581743 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-fr6bx" event={"ID":"0dfee338-4e72-4d66-aed2-e81ce752c4fc","Type":"ContainerDied","Data":"6c5c17d041a5700f43ee2333177bdd871130b65a8d371532eeb436d3acbdfbe0"} Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.581765 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-fr6bx" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.581793 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c5c17d041a5700f43ee2333177bdd871130b65a8d371532eeb436d3acbdfbe0" Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.627686 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:07:43 crc kubenswrapper[4966]: I0127 16:07:43.651963 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:07:44 crc kubenswrapper[4966]: I0127 16:07:44.551795 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7" path="/var/lib/kubelet/pods/0c47eca9-d32b-42c4-b69e-8d2c23ffc4d7/volumes" Jan 27 16:07:44 crc kubenswrapper[4966]: I0127 16:07:44.554619 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ded670a-f870-4df0-88b4-330b5f8e2f19" path="/var/lib/kubelet/pods/1ded670a-f870-4df0-88b4-330b5f8e2f19/volumes" Jan 27 16:07:44 crc kubenswrapper[4966]: I0127 16:07:44.598287 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dbe16c5-6e68-4e28-9c01-e9558046f377","Type":"ContainerStarted","Data":"8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1"} Jan 27 16:07:44 crc kubenswrapper[4966]: I0127 16:07:44.598334 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dbe16c5-6e68-4e28-9c01-e9558046f377","Type":"ContainerStarted","Data":"fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6"} Jan 27 16:07:44 crc kubenswrapper[4966]: I0127 16:07:44.598348 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dbe16c5-6e68-4e28-9c01-e9558046f377","Type":"ContainerStarted","Data":"54afa192f3751ae69722bcf6894374066dd16800d44a18c983f45aeb2e663246"} Jan 27 16:07:44 crc kubenswrapper[4966]: I0127 
16:07:44.600105 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ec82113e-c41d-4ee5-906b-3aa78d343e46","Type":"ContainerStarted","Data":"ef049f30f83cb83855f56f0f8dc786de66113f501f66a8621e235a5bf0736608"} Jan 27 16:07:44 crc kubenswrapper[4966]: I0127 16:07:44.600128 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ec82113e-c41d-4ee5-906b-3aa78d343e46","Type":"ContainerStarted","Data":"74297b816a6639dce8e2dfb1b26d35da82c5f708da0ac33319cdc0fc4cf220c5"} Jan 27 16:07:44 crc kubenswrapper[4966]: I0127 16:07:44.619584 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.61956804 podStartE2EDuration="2.61956804s" podCreationTimestamp="2026-01-27 16:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:07:44.616349798 +0000 UTC m=+1530.919143286" watchObservedRunningTime="2026-01-27 16:07:44.61956804 +0000 UTC m=+1530.922361528" Jan 27 16:07:44 crc kubenswrapper[4966]: I0127 16:07:44.644148 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.64412942 podStartE2EDuration="2.64412942s" podCreationTimestamp="2026-01-27 16:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:07:44.638866195 +0000 UTC m=+1530.941659683" watchObservedRunningTime="2026-01-27 16:07:44.64412942 +0000 UTC m=+1530.946922908" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.582842 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 16:07:45 crc kubenswrapper[4966]: E0127 16:07:45.589011 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfee338-4e72-4d66-aed2-e81ce752c4fc" containerName="aodh-db-sync" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.589046 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfee338-4e72-4d66-aed2-e81ce752c4fc" containerName="aodh-db-sync" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.589458 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfee338-4e72-4d66-aed2-e81ce752c4fc" containerName="aodh-db-sync" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.591532 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.595135 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.595278 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.625044 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jknwm" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.629258 4966 generic.go:334] "Generic (PLEG): container finished" podID="79463280-8972-4b4c-bfdb-c4909a812a02" containerID="61250b183962098966ce3e70459f1c775fe3ed9157020a3bfb2781ae5a007200" exitCode=0 Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.629963 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvqgn" event={"ID":"79463280-8972-4b4c-bfdb-c4909a812a02","Type":"ContainerDied","Data":"61250b183962098966ce3e70459f1c775fe3ed9157020a3bfb2781ae5a007200"} Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.636485 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-config-data\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.636742 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrk7w\" (UniqueName: \"kubernetes.io/projected/a9768164-573c-4615-a9b9-4d71b0cea701-kube-api-access-qrk7w\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.636774 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-scripts\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.636948 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.650326 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.740215 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrk7w\" (UniqueName: \"kubernetes.io/projected/a9768164-573c-4615-a9b9-4d71b0cea701-kube-api-access-qrk7w\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.740268 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-scripts\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.740387 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.740477 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-config-data\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.750417 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.753139 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-config-data\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.754501 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-scripts\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.755543 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrk7w\" (UniqueName: \"kubernetes.io/projected/a9768164-573c-4615-a9b9-4d71b0cea701-kube-api-access-qrk7w\") pod \"aodh-0\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " pod="openstack/aodh-0" Jan 27 16:07:45 crc kubenswrapper[4966]: I0127 16:07:45.912442 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 16:07:46 crc kubenswrapper[4966]: I0127 16:07:46.486099 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 16:07:46 crc kubenswrapper[4966]: I0127 16:07:46.644340 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerStarted","Data":"0460b2cf9db2555fc61e8ee1a12dee0ff54bfeaaa3ce773e1698ba42fb41961a"} Jan 27 16:07:47 crc kubenswrapper[4966]: I0127 16:07:47.754952 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvqgn" event={"ID":"79463280-8972-4b4c-bfdb-c4909a812a02","Type":"ContainerStarted","Data":"309e4aa485a9e2cbd83801ab807116e4576a635bcfa5b05923d1843cea0ab165"} Jan 27 16:07:47 crc kubenswrapper[4966]: I0127 16:07:47.780170 4966 generic.go:334] "Generic (PLEG): container finished" podID="4e490f34-cd48-4988-9a8e-ea51b08268fc" containerID="72d3d1c8f7e26e88a0d2c3f86a38135af1c493d3a994402eccb79d631ca88c54" exitCode=0 Jan 27 16:07:47 crc kubenswrapper[4966]: I0127 16:07:47.780259 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2fjt" event={"ID":"4e490f34-cd48-4988-9a8e-ea51b08268fc","Type":"ContainerDied","Data":"72d3d1c8f7e26e88a0d2c3f86a38135af1c493d3a994402eccb79d631ca88c54"} Jan 27 16:07:47 crc kubenswrapper[4966]: I0127 16:07:47.799400 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qvqgn" podStartSLOduration=2.628653899 podStartE2EDuration="8.799372238s" podCreationTimestamp="2026-01-27 16:07:39 +0000 UTC" firstStartedPulling="2026-01-27 16:07:40.460676228 +0000 UTC m=+1526.763469726" lastFinishedPulling="2026-01-27 16:07:46.631394567 +0000 UTC m=+1532.934188065" observedRunningTime="2026-01-27 16:07:47.785536714 +0000 UTC m=+1534.088330222" watchObservedRunningTime="2026-01-27 16:07:47.799372238 +0000 UTC m=+1534.102165746" Jan 27 16:07:47 crc kubenswrapper[4966]: I0127 16:07:47.925874 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:47 crc kubenswrapper[4966]: I0127 16:07:47.925947 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.075087 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.514511 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.515086 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="ceilometer-central-agent" containerID="cri-o://4186fc3feb9edf02dd6773e0c75f3488b2bb01b041014268a6d5df251fae99b8" gracePeriod=30 Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.515186 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="sg-core" containerID="cri-o://4f050ce223151ea25462de0c1cb44011b0ba917caffd6d101e4ebd4b6941f2f5" gracePeriod=30 Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.515227 4966 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="proxy-httpd" containerID="cri-o://400653b14c9f3ebfc976f795542aad5d6681e97ef8806969d402dddbb8088f99" gracePeriod=30 Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.515556 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="ceilometer-notification-agent" containerID="cri-o://3365e8457ad57ae865804abf5566c0c69c9762074e1baea212afec698a86c817" gracePeriod=30 Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.530078 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.236:3000/\": EOF" Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.792554 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerStarted","Data":"88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820"} Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.794733 4966 generic.go:334] "Generic (PLEG): container finished" podID="667c0142-8c32-4658-ac1c-af787a4845e9" containerID="400653b14c9f3ebfc976f795542aad5d6681e97ef8806969d402dddbb8088f99" exitCode=0 Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.794766 4966 generic.go:334] "Generic (PLEG): container finished" podID="667c0142-8c32-4658-ac1c-af787a4845e9" containerID="4f050ce223151ea25462de0c1cb44011b0ba917caffd6d101e4ebd4b6941f2f5" exitCode=2 Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.794795 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerDied","Data":"400653b14c9f3ebfc976f795542aad5d6681e97ef8806969d402dddbb8088f99"} Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.794825 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerDied","Data":"4f050ce223151ea25462de0c1cb44011b0ba917caffd6d101e4ebd4b6941f2f5"} Jan 27 16:07:48 crc kubenswrapper[4966]: I0127 16:07:48.992435 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qklj5" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerName="registry-server" probeResult="failure" output=< Jan 27 16:07:48 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:07:48 crc kubenswrapper[4966]: > Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.411716 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.442340 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.442670 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.573481 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-combined-ca-bundle\") pod \"4e490f34-cd48-4988-9a8e-ea51b08268fc\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.573590 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcb29\" (UniqueName: \"kubernetes.io/projected/4e490f34-cd48-4988-9a8e-ea51b08268fc-kube-api-access-pcb29\") pod \"4e490f34-cd48-4988-9a8e-ea51b08268fc\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.573629 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-scripts\") pod \"4e490f34-cd48-4988-9a8e-ea51b08268fc\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.573866 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-config-data\") pod \"4e490f34-cd48-4988-9a8e-ea51b08268fc\" (UID: \"4e490f34-cd48-4988-9a8e-ea51b08268fc\") " Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.581039 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e490f34-cd48-4988-9a8e-ea51b08268fc-kube-api-access-pcb29" (OuterVolumeSpecName: "kube-api-access-pcb29") pod "4e490f34-cd48-4988-9a8e-ea51b08268fc" (UID: "4e490f34-cd48-4988-9a8e-ea51b08268fc"). InnerVolumeSpecName "kube-api-access-pcb29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.581167 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-scripts" (OuterVolumeSpecName: "scripts") pod "4e490f34-cd48-4988-9a8e-ea51b08268fc" (UID: "4e490f34-cd48-4988-9a8e-ea51b08268fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.610759 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.622528 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-config-data" (OuterVolumeSpecName: "config-data") pod "4e490f34-cd48-4988-9a8e-ea51b08268fc" (UID: "4e490f34-cd48-4988-9a8e-ea51b08268fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.630738 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e490f34-cd48-4988-9a8e-ea51b08268fc" (UID: "4e490f34-cd48-4988-9a8e-ea51b08268fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.677333 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.677365 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.677376 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcb29\" (UniqueName: \"kubernetes.io/projected/4e490f34-cd48-4988-9a8e-ea51b08268fc-kube-api-access-pcb29\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.677387 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e490f34-cd48-4988-9a8e-ea51b08268fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.817640 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2fjt" event={"ID":"4e490f34-cd48-4988-9a8e-ea51b08268fc","Type":"ContainerDied","Data":"f200fb837ce7057fe2b5f909f5baef1b63d54afe5444388852ae98d97d27e0fd"} Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.817910 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f200fb837ce7057fe2b5f909f5baef1b63d54afe5444388852ae98d97d27e0fd" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.818321 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2fjt" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.822529 4966 generic.go:334] "Generic (PLEG): container finished" podID="667c0142-8c32-4658-ac1c-af787a4845e9" containerID="4186fc3feb9edf02dd6773e0c75f3488b2bb01b041014268a6d5df251fae99b8" exitCode=0 Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.824396 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerDied","Data":"4186fc3feb9edf02dd6773e0c75f3488b2bb01b041014268a6d5df251fae99b8"} Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.911806 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 16:07:49 crc kubenswrapper[4966]: E0127 16:07:49.913113 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e490f34-cd48-4988-9a8e-ea51b08268fc" containerName="nova-cell1-conductor-db-sync" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.913146 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e490f34-cd48-4988-9a8e-ea51b08268fc" containerName="nova-cell1-conductor-db-sync" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.913606 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e490f34-cd48-4988-9a8e-ea51b08268fc" containerName="nova-cell1-conductor-db-sync" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.930926 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.931102 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:49 crc kubenswrapper[4966]: I0127 16:07:49.938238 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.090696 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dgd\" (UniqueName: \"kubernetes.io/projected/5a04b039-8946-4237-ae6b-1d1ece6927d5-kube-api-access-w9dgd\") pod \"nova-cell1-conductor-0\" (UID: \"5a04b039-8946-4237-ae6b-1d1ece6927d5\") " pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.091114 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a04b039-8946-4237-ae6b-1d1ece6927d5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a04b039-8946-4237-ae6b-1d1ece6927d5\") " pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.091286 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a04b039-8946-4237-ae6b-1d1ece6927d5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a04b039-8946-4237-ae6b-1d1ece6927d5\") " pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.193235 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a04b039-8946-4237-ae6b-1d1ece6927d5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a04b039-8946-4237-ae6b-1d1ece6927d5\") " pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 
16:07:50.193327 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a04b039-8946-4237-ae6b-1d1ece6927d5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a04b039-8946-4237-ae6b-1d1ece6927d5\") " pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.193424 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dgd\" (UniqueName: \"kubernetes.io/projected/5a04b039-8946-4237-ae6b-1d1ece6927d5-kube-api-access-w9dgd\") pod \"nova-cell1-conductor-0\" (UID: \"5a04b039-8946-4237-ae6b-1d1ece6927d5\") " pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.197651 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a04b039-8946-4237-ae6b-1d1ece6927d5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a04b039-8946-4237-ae6b-1d1ece6927d5\") " pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.198296 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a04b039-8946-4237-ae6b-1d1ece6927d5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a04b039-8946-4237-ae6b-1d1ece6927d5\") " pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.214373 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dgd\" (UniqueName: \"kubernetes.io/projected/5a04b039-8946-4237-ae6b-1d1ece6927d5-kube-api-access-w9dgd\") pod \"nova-cell1-conductor-0\" (UID: \"5a04b039-8946-4237-ae6b-1d1ece6927d5\") " pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.261729 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.298057 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.236:3000/\": dial tcp 10.217.0.236:3000: connect: connection refused" Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.527443 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qvqgn" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" containerName="registry-server" probeResult="failure" output=< Jan 27 16:07:50 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:07:50 crc kubenswrapper[4966]: > Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.713043 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxq52" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" probeResult="failure" output=< Jan 27 16:07:50 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:07:50 crc kubenswrapper[4966]: > Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.786151 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 16:07:50 crc kubenswrapper[4966]: W0127 16:07:50.789006 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a04b039_8946_4237_ae6b_1d1ece6927d5.slice/crio-fe3d3a8d4f663945f152a6a0cfa8309f14da497cf1392db66941e25222b0b44a WatchSource:0}: Error finding container fe3d3a8d4f663945f152a6a0cfa8309f14da497cf1392db66941e25222b0b44a: Status 404 returned error can't find the container with id fe3d3a8d4f663945f152a6a0cfa8309f14da497cf1392db66941e25222b0b44a Jan 27 16:07:50 crc kubenswrapper[4966]: I0127 16:07:50.843438 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5a04b039-8946-4237-ae6b-1d1ece6927d5","Type":"ContainerStarted","Data":"fe3d3a8d4f663945f152a6a0cfa8309f14da497cf1392db66941e25222b0b44a"} Jan 27 16:07:51 crc kubenswrapper[4966]: I0127 16:07:51.858280 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5a04b039-8946-4237-ae6b-1d1ece6927d5","Type":"ContainerStarted","Data":"7c95571ee5fbe5014addd7d33d51008f20542021db746c08d8ec1bea915c755a"} Jan 27 16:07:51 crc kubenswrapper[4966]: I0127 16:07:51.858639 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:51 crc kubenswrapper[4966]: I0127 16:07:51.861787 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerStarted","Data":"43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245"} Jan 27 16:07:52 crc kubenswrapper[4966]: I0127 16:07:52.521363 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:07:52 crc kubenswrapper[4966]: E0127 16:07:52.521908 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:07:52 crc kubenswrapper[4966]: I0127 16:07:52.894795 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerStarted","Data":"b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5"} Jan 27 16:07:52 crc kubenswrapper[4966]: I0127 16:07:52.897160 4966 generic.go:334] "Generic (PLEG): container finished" podID="667c0142-8c32-4658-ac1c-af787a4845e9" containerID="3365e8457ad57ae865804abf5566c0c69c9762074e1baea212afec698a86c817" exitCode=0 Jan 27 16:07:52 crc kubenswrapper[4966]: I0127 16:07:52.898246 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerDied","Data":"3365e8457ad57ae865804abf5566c0c69c9762074e1baea212afec698a86c817"} Jan 27 16:07:53 crc kubenswrapper[4966]: I0127 16:07:53.034066 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 16:07:53 crc kubenswrapper[4966]: I0127 16:07:53.034108 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 16:07:53 crc kubenswrapper[4966]: I0127 16:07:53.075602 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 16:07:53 crc kubenswrapper[4966]: I0127 16:07:53.154284 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 16:07:53 crc kubenswrapper[4966]: I0127 16:07:53.187840 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.187790083 podStartE2EDuration="4.187790083s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:07:51.882774591 +0000 UTC m=+1538.185568079" watchObservedRunningTime="2026-01-27 16:07:53.187790083 +0000 UTC m=+1539.490583571" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.592376 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.760757 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-combined-ca-bundle\") pod \"667c0142-8c32-4658-ac1c-af787a4845e9\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.761815 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-scripts\") pod \"667c0142-8c32-4658-ac1c-af787a4845e9\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.761971 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-run-httpd\") pod \"667c0142-8c32-4658-ac1c-af787a4845e9\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.762019 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5z6n\" (UniqueName: \"kubernetes.io/projected/667c0142-8c32-4658-ac1c-af787a4845e9-kube-api-access-x5z6n\") pod \"667c0142-8c32-4658-ac1c-af787a4845e9\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.762044 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-config-data\") pod \"667c0142-8c32-4658-ac1c-af787a4845e9\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.762108 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-sg-core-conf-yaml\") pod \"667c0142-8c32-4658-ac1c-af787a4845e9\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.762162 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-log-httpd\") pod \"667c0142-8c32-4658-ac1c-af787a4845e9\" (UID: \"667c0142-8c32-4658-ac1c-af787a4845e9\") " Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.763967 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "667c0142-8c32-4658-ac1c-af787a4845e9" (UID: "667c0142-8c32-4658-ac1c-af787a4845e9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.764799 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "667c0142-8c32-4658-ac1c-af787a4845e9" (UID: "667c0142-8c32-4658-ac1c-af787a4845e9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.768763 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667c0142-8c32-4658-ac1c-af787a4845e9-kube-api-access-x5z6n" (OuterVolumeSpecName: "kube-api-access-x5z6n") pod "667c0142-8c32-4658-ac1c-af787a4845e9" (UID: "667c0142-8c32-4658-ac1c-af787a4845e9"). InnerVolumeSpecName "kube-api-access-x5z6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.777996 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-scripts" (OuterVolumeSpecName: "scripts") pod "667c0142-8c32-4658-ac1c-af787a4845e9" (UID: "667c0142-8c32-4658-ac1c-af787a4845e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.841202 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "667c0142-8c32-4658-ac1c-af787a4845e9" (UID: "667c0142-8c32-4658-ac1c-af787a4845e9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.865722 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.865754 4966 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.865768 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5z6n\" (UniqueName: \"kubernetes.io/projected/667c0142-8c32-4658-ac1c-af787a4845e9-kube-api-access-x5z6n\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.865780 4966 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.865793 4966 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/667c0142-8c32-4658-ac1c-af787a4845e9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.902832 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "667c0142-8c32-4658-ac1c-af787a4845e9" (UID: "667c0142-8c32-4658-ac1c-af787a4845e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.915127 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"667c0142-8c32-4658-ac1c-af787a4845e9","Type":"ContainerDied","Data":"90427550b73e642365adecff26f608ab994125c20884d53be344e7ab293177b8"} Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.915181 4966 scope.go:117] "RemoveContainer" containerID="400653b14c9f3ebfc976f795542aad5d6681e97ef8806969d402dddbb8088f99" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.915193 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.960578 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:53.969724 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.024403 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-config-data" (OuterVolumeSpecName: "config-data") pod "667c0142-8c32-4658-ac1c-af787a4845e9" (UID: "667c0142-8c32-4658-ac1c-af787a4845e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.077026 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667c0142-8c32-4658-ac1c-af787a4845e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.116114 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.116507 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.256604 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.271534 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.282218 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:54 crc kubenswrapper[4966]: E0127 16:07:54.282719 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="sg-core" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.282732 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="sg-core" Jan 27 16:07:54 crc kubenswrapper[4966]: E0127 16:07:54.282768 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" 
containerName="proxy-httpd" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.282775 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="proxy-httpd" Jan 27 16:07:54 crc kubenswrapper[4966]: E0127 16:07:54.282788 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="ceilometer-central-agent" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.282794 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="ceilometer-central-agent" Jan 27 16:07:54 crc kubenswrapper[4966]: E0127 16:07:54.282814 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="ceilometer-notification-agent" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.282820 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="ceilometer-notification-agent" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.283055 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="ceilometer-notification-agent" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.283073 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="ceilometer-central-agent" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.283083 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="proxy-httpd" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.283107 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" containerName="sg-core" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.285173 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.293090 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.293353 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.319222 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.384562 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-run-httpd\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.384618 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-scripts\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.384671 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.384720 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.384752 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-config-data\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.384783 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-log-httpd\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.384801 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqb5g\" (UniqueName: \"kubernetes.io/projected/5173799b-684f-4c79-8cff-9e490bf9e44d-kube-api-access-kqb5g\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.486946 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 
16:07:54.487003 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-config-data\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.487043 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-log-httpd\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.487060 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqb5g\" (UniqueName: \"kubernetes.io/projected/5173799b-684f-4c79-8cff-9e490bf9e44d-kube-api-access-kqb5g\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.487181 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-run-httpd\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.487214 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-scripts\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.487262 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.487585 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-log-httpd\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.487860 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-run-httpd\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.493091 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.493606 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.494130 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-config-data\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.494772 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-scripts\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.542765 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqb5g\" (UniqueName: \"kubernetes.io/projected/5173799b-684f-4c79-8cff-9e490bf9e44d-kube-api-access-kqb5g\") pod \"ceilometer-0\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.572532 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="667c0142-8c32-4658-ac1c-af787a4845e9" path="/var/lib/kubelet/pods/667c0142-8c32-4658-ac1c-af787a4845e9/volumes" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.625587 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.634435 4966 scope.go:117] "RemoveContainer" containerID="4f050ce223151ea25462de0c1cb44011b0ba917caffd6d101e4ebd4b6941f2f5" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.693443 4966 scope.go:117] "RemoveContainer" containerID="3365e8457ad57ae865804abf5566c0c69c9762074e1baea212afec698a86c817" Jan 27 16:07:54 crc kubenswrapper[4966]: I0127 16:07:54.719276 4966 scope.go:117] "RemoveContainer" containerID="4186fc3feb9edf02dd6773e0c75f3488b2bb01b041014268a6d5df251fae99b8" Jan 27 16:07:55 crc kubenswrapper[4966]: I0127 16:07:55.297549 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 16:07:55 crc kubenswrapper[4966]: I0127 16:07:55.331336 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:07:55 crc kubenswrapper[4966]: I0127 16:07:55.971597 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerStarted","Data":"6725fce46c8cc25e2a3a4c3bef728bc631ed43b80936cd25c9e6e80e981f5b5f"} Jan 27 16:07:55 crc kubenswrapper[4966]: I0127 16:07:55.975053 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerStarted","Data":"2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7"} Jan 27 16:07:55 crc kubenswrapper[4966]: I0127 16:07:55.975288 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-api" containerID="cri-o://88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820" gracePeriod=30 Jan 27 16:07:55 crc kubenswrapper[4966]: I0127 16:07:55.975813 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-listener" containerID="cri-o://2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7" gracePeriod=30 Jan 27 16:07:55 crc kubenswrapper[4966]: I0127 16:07:55.975884 4966 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-notifier" containerID="cri-o://b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5" gracePeriod=30 Jan 27 16:07:55 crc kubenswrapper[4966]: I0127 16:07:55.975983 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-evaluator" containerID="cri-o://43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245" gracePeriod=30 Jan 27 16:07:56 crc kubenswrapper[4966]: I0127 16:07:56.011912 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.893202584 podStartE2EDuration="11.011874447s" podCreationTimestamp="2026-01-27 16:07:45 +0000 UTC" firstStartedPulling="2026-01-27 16:07:46.630563232 +0000 UTC m=+1532.933356730" lastFinishedPulling="2026-01-27 16:07:54.749235105 +0000 UTC m=+1541.052028593" observedRunningTime="2026-01-27 16:07:56.001661546 +0000 UTC m=+1542.304455054" watchObservedRunningTime="2026-01-27 16:07:56.011874447 +0000 UTC m=+1542.314667935" Jan 27 16:07:56 crc kubenswrapper[4966]: I0127 16:07:56.992284 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerStarted","Data":"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc"} Jan 27 16:07:56 crc kubenswrapper[4966]: I0127 16:07:56.992742 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerStarted","Data":"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a"} Jan 27 16:07:56 crc kubenswrapper[4966]: I0127 16:07:56.995196 4966 generic.go:334] "Generic (PLEG): container finished" podID="a9768164-573c-4615-a9b9-4d71b0cea701" containerID="b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5" exitCode=0 Jan 27 16:07:56 crc kubenswrapper[4966]: I0127 16:07:56.995220 4966 generic.go:334] "Generic (PLEG): container finished" podID="a9768164-573c-4615-a9b9-4d71b0cea701" containerID="43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245" exitCode=0 Jan 27 16:07:56 crc kubenswrapper[4966]: I0127 16:07:56.995227 4966 generic.go:334] "Generic (PLEG): container finished" podID="a9768164-573c-4615-a9b9-4d71b0cea701" containerID="88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820" exitCode=0 Jan 27 16:07:56 crc kubenswrapper[4966]: I0127 16:07:56.995240 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerDied","Data":"b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5"} Jan 27 16:07:56 crc kubenswrapper[4966]: I0127 16:07:56.995255 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerDied","Data":"43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245"} Jan 27 16:07:56 crc kubenswrapper[4966]: I0127 16:07:56.995263 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerDied","Data":"88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820"} Jan 27 16:07:57 crc kubenswrapper[4966]: I0127 16:07:57.989351 4966 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:58 crc kubenswrapper[4966]: I0127 16:07:58.024205 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerStarted","Data":"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00"} Jan 27 16:07:58 crc kubenswrapper[4966]: I0127 16:07:58.040999 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:58 crc kubenswrapper[4966]: I0127 16:07:58.722670 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qklj5"] Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.039503 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qklj5" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerName="registry-server" containerID="cri-o://2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72" gracePeriod=2 Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.496797 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.522839 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.594566 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.646959 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-utilities\") pod \"7a12ee30-16c2-49ec-8473-7ed403256c25\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.647190 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4j2r\" (UniqueName: \"kubernetes.io/projected/7a12ee30-16c2-49ec-8473-7ed403256c25-kube-api-access-c4j2r\") pod \"7a12ee30-16c2-49ec-8473-7ed403256c25\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.647345 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-catalog-content\") pod \"7a12ee30-16c2-49ec-8473-7ed403256c25\" (UID: \"7a12ee30-16c2-49ec-8473-7ed403256c25\") " Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.648427 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-utilities" (OuterVolumeSpecName: "utilities") pod "7a12ee30-16c2-49ec-8473-7ed403256c25" (UID: "7a12ee30-16c2-49ec-8473-7ed403256c25"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.652500 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.656050 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a12ee30-16c2-49ec-8473-7ed403256c25-kube-api-access-c4j2r" (OuterVolumeSpecName: "kube-api-access-c4j2r") pod "7a12ee30-16c2-49ec-8473-7ed403256c25" (UID: "7a12ee30-16c2-49ec-8473-7ed403256c25"). InnerVolumeSpecName "kube-api-access-c4j2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.703945 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a12ee30-16c2-49ec-8473-7ed403256c25" (UID: "7a12ee30-16c2-49ec-8473-7ed403256c25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.723124 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.755452 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a12ee30-16c2-49ec-8473-7ed403256c25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.755494 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4j2r\" (UniqueName: \"kubernetes.io/projected/7a12ee30-16c2-49ec-8473-7ed403256c25-kube-api-access-c4j2r\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:59 crc kubenswrapper[4966]: I0127 16:07:59.782213 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.052916 4966 generic.go:334] "Generic (PLEG): container finished" podID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerID="2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72" exitCode=0 Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.053016 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qklj5" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.053014 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qklj5" event={"ID":"7a12ee30-16c2-49ec-8473-7ed403256c25","Type":"ContainerDied","Data":"2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72"} Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.053100 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qklj5" event={"ID":"7a12ee30-16c2-49ec-8473-7ed403256c25","Type":"ContainerDied","Data":"190637d3a66e3a32add1b07b83ca8c1db22b54979220e021db1ab3d08b4373f1"} Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.053149 4966 scope.go:117] "RemoveContainer" containerID="2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.058206 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerStarted","Data":"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554"} Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.081740 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.635817521 podStartE2EDuration="6.081721853s" podCreationTimestamp="2026-01-27 16:07:54 +0000 UTC" firstStartedPulling="2026-01-27 16:07:55.35506178 +0000 UTC m=+1541.657855268" lastFinishedPulling="2026-01-27 16:07:58.800966102 +0000 UTC m=+1545.103759600" observedRunningTime="2026-01-27 16:08:00.077314375 +0000 UTC m=+1546.380107893" watchObservedRunningTime="2026-01-27 16:08:00.081721853 +0000 UTC m=+1546.384515341" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.083638 4966 scope.go:117] "RemoveContainer" containerID="500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.118856 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qklj5"] Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.135123 4966 scope.go:117] "RemoveContainer" containerID="43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.138474 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qklj5"] Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.185813 4966 scope.go:117] "RemoveContainer" containerID="2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72" Jan 27 16:08:00 crc kubenswrapper[4966]: E0127 16:08:00.186619 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72\": container with ID starting with 2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72 not found: ID does not exist" containerID="2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.186680 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72"} err="failed to get container status \"2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72\": rpc error: code = NotFound desc = could not find container 
\"2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72\": container with ID starting with 2ee96eac44e8925e1c04387b4b45f89db23ffe84d74a8040f517cb99477b8d72 not found: ID does not exist" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.186716 4966 scope.go:117] "RemoveContainer" containerID="500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b" Jan 27 16:08:00 crc kubenswrapper[4966]: E0127 16:08:00.187689 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b\": container with ID starting with 500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b not found: ID does not exist" containerID="500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.187757 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b"} err="failed to get container status \"500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b\": rpc error: code = NotFound desc = could not find container \"500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b\": container with ID starting with 500919781b0ae88c726aa95f689318a1b7db170b8a14f9f44039b51f8d0faa1b not found: ID does not exist" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.187800 4966 scope.go:117] "RemoveContainer" containerID="43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7" Jan 27 16:08:00 crc kubenswrapper[4966]: E0127 16:08:00.188499 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7\": container with ID starting with 43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7 not found: ID does not exist" containerID="43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.188574 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7"} err="failed to get container status \"43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7\": rpc error: code = NotFound desc = could not find container \"43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7\": container with ID starting with 43ee80b125ee6913f16bcddc1e91fd28414ca4abeaa472ec49e010bbc67615e7 not found: ID does not exist" Jan 27 16:08:00 crc kubenswrapper[4966]: I0127 16:08:00.537236 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" path="/var/lib/kubelet/pods/7a12ee30-16c2-49ec-8473-7ed403256c25/volumes" Jan 27 16:08:01 crc kubenswrapper[4966]: I0127 16:08:01.071316 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 16:08:01 crc kubenswrapper[4966]: I0127 16:08:01.918521 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvqgn"] Jan 27 16:08:01 crc kubenswrapper[4966]: I0127 16:08:01.919031 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qvqgn" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" containerName="registry-server" 
containerID="cri-o://309e4aa485a9e2cbd83801ab807116e4576a635bcfa5b05923d1843cea0ab165" gracePeriod=2 Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.087359 4966 generic.go:334] "Generic (PLEG): container finished" podID="79463280-8972-4b4c-bfdb-c4909a812a02" containerID="309e4aa485a9e2cbd83801ab807116e4576a635bcfa5b05923d1843cea0ab165" exitCode=0 Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.087463 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvqgn" event={"ID":"79463280-8972-4b4c-bfdb-c4909a812a02","Type":"ContainerDied","Data":"309e4aa485a9e2cbd83801ab807116e4576a635bcfa5b05923d1843cea0ab165"} Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.501789 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.527883 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-catalog-content\") pod \"79463280-8972-4b4c-bfdb-c4909a812a02\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.527944 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-utilities\") pod \"79463280-8972-4b4c-bfdb-c4909a812a02\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.528143 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6mxf\" (UniqueName: \"kubernetes.io/projected/79463280-8972-4b4c-bfdb-c4909a812a02-kube-api-access-v6mxf\") pod \"79463280-8972-4b4c-bfdb-c4909a812a02\" (UID: \"79463280-8972-4b4c-bfdb-c4909a812a02\") " Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.529671 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-utilities" (OuterVolumeSpecName: "utilities") pod "79463280-8972-4b4c-bfdb-c4909a812a02" (UID: "79463280-8972-4b4c-bfdb-c4909a812a02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.542183 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79463280-8972-4b4c-bfdb-c4909a812a02-kube-api-access-v6mxf" (OuterVolumeSpecName: "kube-api-access-v6mxf") pod "79463280-8972-4b4c-bfdb-c4909a812a02" (UID: "79463280-8972-4b4c-bfdb-c4909a812a02"). InnerVolumeSpecName "kube-api-access-v6mxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.593754 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79463280-8972-4b4c-bfdb-c4909a812a02" (UID: "79463280-8972-4b4c-bfdb-c4909a812a02"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.631216 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6mxf\" (UniqueName: \"kubernetes.io/projected/79463280-8972-4b4c-bfdb-c4909a812a02-kube-api-access-v6mxf\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.631266 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:02 crc kubenswrapper[4966]: I0127 16:08:02.631286 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79463280-8972-4b4c-bfdb-c4909a812a02-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.035777 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.036401 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.037500 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.038829 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.100470 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvqgn" event={"ID":"79463280-8972-4b4c-bfdb-c4909a812a02","Type":"ContainerDied","Data":"0480985c50c7206385c832be6a06bb272da4a41b229dd5af5f4a7f2258377800"} Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.100527 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qvqgn" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.100531 4966 scope.go:117] "RemoveContainer" containerID="309e4aa485a9e2cbd83801ab807116e4576a635bcfa5b05923d1843cea0ab165" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.100755 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.108831 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.138353 4966 scope.go:117] "RemoveContainer" containerID="61250b183962098966ce3e70459f1c775fe3ed9157020a3bfb2781ae5a007200" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.212244 4966 scope.go:117] "RemoveContainer" containerID="323ad2851775c4eed890e574191165ed14743a1862fa7121794028db87ba1578" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.248576 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvqgn"] Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.331733 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qvqgn"] Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.386457 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-6w8s7"] Jan 27 16:08:03 crc kubenswrapper[4966]: E0127 16:08:03.387235 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" containerName="extract-utilities" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.387304 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" containerName="extract-utilities" Jan 27 16:08:03 crc kubenswrapper[4966]: E0127 16:08:03.387356 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" containerName="extract-content" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.387416 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" containerName="extract-content" Jan 27 16:08:03 crc kubenswrapper[4966]: E0127 16:08:03.387466 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerName="registry-server" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.387513 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerName="registry-server" Jan 27 16:08:03 crc kubenswrapper[4966]: E0127 16:08:03.387570 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerName="extract-utilities" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.387613 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerName="extract-utilities" Jan 27 16:08:03 crc kubenswrapper[4966]: E0127 16:08:03.387664 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerName="extract-content" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.387708 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerName="extract-content" Jan 27 16:08:03 crc kubenswrapper[4966]: E0127 16:08:03.389555 4966 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" containerName="registry-server" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.389707 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" containerName="registry-server" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.390396 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a12ee30-16c2-49ec-8473-7ed403256c25" containerName="registry-server" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.390545 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" containerName="registry-server" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.392303 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.424777 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-6w8s7"] Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.458396 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.458470 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.458566 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.458764 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.458949 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kpb2\" (UniqueName: \"kubernetes.io/projected/05824f99-475d-4f23-84fa-33b23a3030b7-kube-api-access-4kpb2\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.458973 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-config\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.563239 4966 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.563445 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kpb2\" (UniqueName: \"kubernetes.io/projected/05824f99-475d-4f23-84fa-33b23a3030b7-kube-api-access-4kpb2\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.563539 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-config\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.563600 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.563662 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.563769 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.567317 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.568062 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.568819 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.570998 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-config\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.571123 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.593027 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kpb2\" (UniqueName: \"kubernetes.io/projected/05824f99-475d-4f23-84fa-33b23a3030b7-kube-api-access-4kpb2\") pod \"dnsmasq-dns-f84f9ccf-6w8s7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") " pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:03 crc kubenswrapper[4966]: I0127 16:08:03.726468 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.124759 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kxq52"] Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.126619 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kxq52" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" containerID="cri-o://7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459" gracePeriod=2 Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.478176 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-6w8s7"] Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.598062 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:08:04 crc kubenswrapper[4966]: E0127 16:08:04.598934 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.599146 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79463280-8972-4b4c-bfdb-c4909a812a02" path="/var/lib/kubelet/pods/79463280-8972-4b4c-bfdb-c4909a812a02/volumes" Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.836424 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.970773 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-utilities\") pod \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.971072 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtfqk\" (UniqueName: \"kubernetes.io/projected/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-kube-api-access-qtfqk\") pod \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.971196 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-catalog-content\") pod \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\" (UID: \"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb\") " Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.971450 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-utilities" (OuterVolumeSpecName: "utilities") pod "b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" (UID: "b7e9482a-8aa4-4efa-92cd-8ac34c500eeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.972057 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:04 crc kubenswrapper[4966]: I0127 16:08:04.977047 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-kube-api-access-qtfqk" (OuterVolumeSpecName: "kube-api-access-qtfqk") pod "b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" (UID: "b7e9482a-8aa4-4efa-92cd-8ac34c500eeb"). InnerVolumeSpecName "kube-api-access-qtfqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.074621 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtfqk\" (UniqueName: \"kubernetes.io/projected/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-kube-api-access-qtfqk\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.083078 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" (UID: "b7e9482a-8aa4-4efa-92cd-8ac34c500eeb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.137224 4966 generic.go:334] "Generic (PLEG): container finished" podID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerID="7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459" exitCode=0 Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.137292 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxq52" event={"ID":"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb","Type":"ContainerDied","Data":"7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459"} Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.137330 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxq52" event={"ID":"b7e9482a-8aa4-4efa-92cd-8ac34c500eeb","Type":"ContainerDied","Data":"0c07757bae41ae652d530b000539107f1e8639dff404fe518c1980e387f952d8"} Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.137294 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kxq52" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.137347 4966 scope.go:117] "RemoveContainer" containerID="7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.139991 4966 generic.go:334] "Generic (PLEG): container finished" podID="05824f99-475d-4f23-84fa-33b23a3030b7" containerID="23e15de12b5420e6c3ad9ad1150fe0d64f0c7535f06e964579051108d2799b9d" exitCode=0 Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.140027 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" event={"ID":"05824f99-475d-4f23-84fa-33b23a3030b7","Type":"ContainerDied","Data":"23e15de12b5420e6c3ad9ad1150fe0d64f0c7535f06e964579051108d2799b9d"} Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.140081 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" event={"ID":"05824f99-475d-4f23-84fa-33b23a3030b7","Type":"ContainerStarted","Data":"e798561ac868a5113571a29afd5924410061a1040ccfcf721f787a601f1231d9"} Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.164460 4966 scope.go:117] "RemoveContainer" containerID="9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.178364 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.196466 4966 scope.go:117] "RemoveContainer" containerID="de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.202016 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kxq52"] Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.213282 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kxq52"] Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.254741 4966 scope.go:117] "RemoveContainer" containerID="7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459" Jan 27 16:08:05 crc kubenswrapper[4966]: E0127 16:08:05.257382 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459\": container with ID starting with 7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459 not found: ID does not exist" containerID="7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.257439 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459"} err="failed to get container status \"7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459\": rpc error: code = NotFound desc = could not find container \"7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459\": container with ID starting with 7231d6bf02993ae8aa7403697c35960483922ec96248cff0aa08b09f1e2b9459 not found: ID does not exist" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.257462 4966 scope.go:117] "RemoveContainer" containerID="9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066" Jan 27 16:08:05 crc kubenswrapper[4966]: E0127 16:08:05.258032 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066\": container with ID starting with 9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066 not found: ID does not exist" containerID="9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.258105 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066"} err="failed to get container status \"9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066\": rpc error: code = NotFound desc = could not find container \"9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066\": container with ID starting with 9eb953e556e73c7e43c680279efe4521dbc36ae9401a988c33287db1ccb06066 not found: ID does not exist" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.258153 4966 scope.go:117] "RemoveContainer" containerID="de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143" Jan 27 16:08:05 crc kubenswrapper[4966]: E0127 16:08:05.258581 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143\": container with ID starting with de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143 not found: ID does not exist" containerID="de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.258634 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143"} err="failed to get container status \"de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143\": rpc error: code = NotFound desc = could not find container \"de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143\": container with ID starting with de88a3f14568ac8cefd0e2af199e5c16f8bb0a63503e5bb8156305e41ea47143 not found: ID does not exist" Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.946074 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.946617 
4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="ceilometer-central-agent" containerID="cri-o://0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a" gracePeriod=30 Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.946744 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="proxy-httpd" containerID="cri-o://7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554" gracePeriod=30 Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.946791 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="sg-core" containerID="cri-o://f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00" gracePeriod=30 Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.946825 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="ceilometer-notification-agent" containerID="cri-o://e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc" gracePeriod=30 Jan 27 16:08:05 crc kubenswrapper[4966]: I0127 16:08:05.974286 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.004114 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.102749 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-combined-ca-bundle\") pod \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.102821 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7w9k\" (UniqueName: \"kubernetes.io/projected/69d2d694-e151-492e-9cb4-8dc614ea1b44-kube-api-access-g7w9k\") pod \"69d2d694-e151-492e-9cb4-8dc614ea1b44\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.102942 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-combined-ca-bundle\") pod \"69d2d694-e151-492e-9cb4-8dc614ea1b44\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.102983 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtb4k\" (UniqueName: \"kubernetes.io/projected/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-kube-api-access-dtb4k\") pod \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.103025 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-config-data\") pod \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 
16:08:06.103161 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-config-data\") pod \"69d2d694-e151-492e-9cb4-8dc614ea1b44\" (UID: \"69d2d694-e151-492e-9cb4-8dc614ea1b44\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.103217 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-logs\") pod \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\" (UID: \"2b7d4ffe-671a-4547-83aa-50f011b9ae7e\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.103638 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-logs" (OuterVolumeSpecName: "logs") pod "2b7d4ffe-671a-4547-83aa-50f011b9ae7e" (UID: "2b7d4ffe-671a-4547-83aa-50f011b9ae7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.103745 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.118211 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-kube-api-access-dtb4k" (OuterVolumeSpecName: "kube-api-access-dtb4k") pod "2b7d4ffe-671a-4547-83aa-50f011b9ae7e" (UID: "2b7d4ffe-671a-4547-83aa-50f011b9ae7e"). InnerVolumeSpecName "kube-api-access-dtb4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.124208 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d2d694-e151-492e-9cb4-8dc614ea1b44-kube-api-access-g7w9k" (OuterVolumeSpecName: "kube-api-access-g7w9k") pod "69d2d694-e151-492e-9cb4-8dc614ea1b44" (UID: "69d2d694-e151-492e-9cb4-8dc614ea1b44"). InnerVolumeSpecName "kube-api-access-g7w9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.126611 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.151987 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b7d4ffe-671a-4547-83aa-50f011b9ae7e" (UID: "2b7d4ffe-671a-4547-83aa-50f011b9ae7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.152773 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-config-data" (OuterVolumeSpecName: "config-data") pod "2b7d4ffe-671a-4547-83aa-50f011b9ae7e" (UID: "2b7d4ffe-671a-4547-83aa-50f011b9ae7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.153734 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-config-data" (OuterVolumeSpecName: "config-data") pod "69d2d694-e151-492e-9cb4-8dc614ea1b44" (UID: "69d2d694-e151-492e-9cb4-8dc614ea1b44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.160438 4966 generic.go:334] "Generic (PLEG): container finished" podID="69d2d694-e151-492e-9cb4-8dc614ea1b44" containerID="a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a" exitCode=137 Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.160541 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"69d2d694-e151-492e-9cb4-8dc614ea1b44","Type":"ContainerDied","Data":"a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a"} Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.160567 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.160588 4966 scope.go:117] "RemoveContainer" containerID="a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.160575 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"69d2d694-e151-492e-9cb4-8dc614ea1b44","Type":"ContainerDied","Data":"1426ebce8781a350f84522a1f391dc10e063f5a863febde5ca541cbd6d4e5bd5"} Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.166607 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69d2d694-e151-492e-9cb4-8dc614ea1b44" (UID: "69d2d694-e151-492e-9cb4-8dc614ea1b44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.167249 4966 generic.go:334] "Generic (PLEG): container finished" podID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerID="d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c" exitCode=137 Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.167314 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2b7d4ffe-671a-4547-83aa-50f011b9ae7e","Type":"ContainerDied","Data":"d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c"} Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.167345 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2b7d4ffe-671a-4547-83aa-50f011b9ae7e","Type":"ContainerDied","Data":"17a2dbcaca7e7288beeb12ebd4c55caaf4b29d46f1e7b608633d4b07108d0497"} Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.167416 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.177847 4966 generic.go:334] "Generic (PLEG): container finished" podID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerID="f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00" exitCode=2 Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.178115 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerDied","Data":"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00"} Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.183992 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-log" containerID="cri-o://fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6" gracePeriod=30 Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.184413 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" event={"ID":"05824f99-475d-4f23-84fa-33b23a3030b7","Type":"ContainerStarted","Data":"76722416e8e1d9ef1654f06a700f693a846cb88706cb407c1b1d9aff39b48390"} Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.184555 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-api" containerID="cri-o://8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1" gracePeriod=30 Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.184996 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.206252 4966 scope.go:117] "RemoveContainer" containerID="a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a" Jan 27 16:08:06 crc kubenswrapper[4966]: E0127 16:08:06.207150 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a\": container with ID starting with a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a not found: ID does not exist" containerID="a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.207185 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a"} err="failed to get container status \"a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a\": rpc error: code = NotFound desc = could not find container \"a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a\": container with ID starting with a15f9a28a79f7983ad6416efef931f8168d080550d9ba99357d0a514ad900a3a not found: ID does not exist" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.207210 4966 scope.go:117] "RemoveContainer" containerID="d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.209519 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.209568 4966 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.209580 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.209598 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7w9k\" (UniqueName: \"kubernetes.io/projected/69d2d694-e151-492e-9cb4-8dc614ea1b44-kube-api-access-g7w9k\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.209612 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d2d694-e151-492e-9cb4-8dc614ea1b44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.209624 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtb4k\" (UniqueName: \"kubernetes.io/projected/2b7d4ffe-671a-4547-83aa-50f011b9ae7e-kube-api-access-dtb4k\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.213230 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.235819 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.280085 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:08:06 crc kubenswrapper[4966]: E0127 16:08:06.281172 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerName="nova-metadata-metadata" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281191 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerName="nova-metadata-metadata" Jan 27 16:08:06 crc kubenswrapper[4966]: E0127 16:08:06.281223 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281230 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" Jan 27 16:08:06 crc kubenswrapper[4966]: E0127 16:08:06.281247 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="extract-utilities" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281253 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="extract-utilities" Jan 27 16:08:06 crc kubenswrapper[4966]: E0127 16:08:06.281266 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="extract-content" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281271 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="extract-content" Jan 27 16:08:06 crc kubenswrapper[4966]: E0127 16:08:06.281289 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d2d694-e151-492e-9cb4-8dc614ea1b44" containerName="nova-cell1-novncproxy-novncproxy" 
Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281295 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d2d694-e151-492e-9cb4-8dc614ea1b44" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 16:08:06 crc kubenswrapper[4966]: E0127 16:08:06.281306 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerName="nova-metadata-log" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281332 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerName="nova-metadata-log" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281568 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d2d694-e151-492e-9cb4-8dc614ea1b44" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281590 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerName="nova-metadata-log" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281598 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" containerName="nova-metadata-metadata" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.281608 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" containerName="registry-server" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.284841 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.286407 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" podStartSLOduration=3.286397749 podStartE2EDuration="3.286397749s" podCreationTimestamp="2026-01-27 16:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:08:06.230460733 +0000 UTC m=+1552.533254221" watchObservedRunningTime="2026-01-27 16:08:06.286397749 +0000 UTC m=+1552.589191227" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.289735 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.289846 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.307709 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.331148 4966 scope.go:117] "RemoveContainer" containerID="92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.414033 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2adce5-58ba-4306-aad8-cdce724c23d1-logs\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.414864 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jrmt\" (UniqueName: \"kubernetes.io/projected/3f2adce5-58ba-4306-aad8-cdce724c23d1-kube-api-access-8jrmt\") pod \"nova-metadata-0\" (UID: 
\"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.415007 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.415104 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.415265 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-config-data\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.517116 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.517209 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.517294 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-config-data\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.517371 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2adce5-58ba-4306-aad8-cdce724c23d1-logs\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.517389 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jrmt\" (UniqueName: \"kubernetes.io/projected/3f2adce5-58ba-4306-aad8-cdce724c23d1-kube-api-access-8jrmt\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.518049 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2adce5-58ba-4306-aad8-cdce724c23d1-logs\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.521426 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.521620 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-config-data\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.523286 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.543002 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7d4ffe-671a-4547-83aa-50f011b9ae7e" path="/var/lib/kubelet/pods/2b7d4ffe-671a-4547-83aa-50f011b9ae7e/volumes" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.543719 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7e9482a-8aa4-4efa-92cd-8ac34c500eeb" path="/var/lib/kubelet/pods/b7e9482a-8aa4-4efa-92cd-8ac34c500eeb/volumes" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.573844 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jrmt\" (UniqueName: \"kubernetes.io/projected/3f2adce5-58ba-4306-aad8-cdce724c23d1-kube-api-access-8jrmt\") pod \"nova-metadata-0\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.612766 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.635549 4966 scope.go:117] "RemoveContainer" containerID="d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c" Jan 27 16:08:06 crc kubenswrapper[4966]: E0127 16:08:06.636100 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c\": container with ID starting with d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c not found: ID does not exist" containerID="d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.636170 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c"} err="failed to get container status \"d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c\": rpc error: code = NotFound desc = could not find container \"d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c\": container with ID starting with d15544ec68c1214c91e7c98ec0eecb3c865f380bcb06367636cb49d2cf1a896c not found: ID does not exist" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.636206 4966 scope.go:117] "RemoveContainer" containerID="92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186" Jan 27 16:08:06 crc kubenswrapper[4966]: E0127 16:08:06.637548 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186\": container with ID starting with 92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186 not found: ID does not exist" containerID="92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.637641 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186"} err="failed to get container status \"92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186\": rpc error: code = NotFound desc = could not find container \"92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186\": container with ID starting with 92721b2cf26aa9700b7654467d0af8ac43ce29f2a61db5936d54d8b2cd60b186 not found: ID does not exist" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.655847 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.683786 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.699409 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.701749 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.705633 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.705826 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.706054 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.712678 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.823963 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.824277 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.824362 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mk48\" (UniqueName: \"kubernetes.io/projected/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-kube-api-access-2mk48\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.824434 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.824492 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.888606 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.925459 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-scripts\") pod \"5173799b-684f-4c79-8cff-9e490bf9e44d\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.925539 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqb5g\" (UniqueName: \"kubernetes.io/projected/5173799b-684f-4c79-8cff-9e490bf9e44d-kube-api-access-kqb5g\") pod \"5173799b-684f-4c79-8cff-9e490bf9e44d\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.925581 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-log-httpd\") pod \"5173799b-684f-4c79-8cff-9e490bf9e44d\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.925662 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-run-httpd\") pod \"5173799b-684f-4c79-8cff-9e490bf9e44d\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.925748 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-sg-core-conf-yaml\") pod \"5173799b-684f-4c79-8cff-9e490bf9e44d\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.925777 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-combined-ca-bundle\") pod \"5173799b-684f-4c79-8cff-9e490bf9e44d\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.925857 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-config-data\") pod \"5173799b-684f-4c79-8cff-9e490bf9e44d\" (UID: \"5173799b-684f-4c79-8cff-9e490bf9e44d\") " Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.926282 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.926436 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.926462 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-vencrypt-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.926491 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mk48\" (UniqueName: \"kubernetes.io/projected/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-kube-api-access-2mk48\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.926544 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.932198 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.932658 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5173799b-684f-4c79-8cff-9e490bf9e44d" (UID: "5173799b-684f-4c79-8cff-9e490bf9e44d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.932771 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5173799b-684f-4c79-8cff-9e490bf9e44d" (UID: "5173799b-684f-4c79-8cff-9e490bf9e44d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.933653 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.933859 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.934826 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.936455 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-scripts" (OuterVolumeSpecName: "scripts") pod "5173799b-684f-4c79-8cff-9e490bf9e44d" (UID: "5173799b-684f-4c79-8cff-9e490bf9e44d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.936765 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5173799b-684f-4c79-8cff-9e490bf9e44d-kube-api-access-kqb5g" (OuterVolumeSpecName: "kube-api-access-kqb5g") pod "5173799b-684f-4c79-8cff-9e490bf9e44d" (UID: "5173799b-684f-4c79-8cff-9e490bf9e44d"). InnerVolumeSpecName "kube-api-access-kqb5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.947754 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mk48\" (UniqueName: \"kubernetes.io/projected/d83cc2a1-a7e1-4a08-be19-acfcabb8bafa-kube-api-access-2mk48\") pod \"nova-cell1-novncproxy-0\" (UID: \"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:06 crc kubenswrapper[4966]: I0127 16:08:06.963198 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5173799b-684f-4c79-8cff-9e490bf9e44d" (UID: "5173799b-684f-4c79-8cff-9e490bf9e44d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.019354 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5173799b-684f-4c79-8cff-9e490bf9e44d" (UID: "5173799b-684f-4c79-8cff-9e490bf9e44d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.029477 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.029519 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqb5g\" (UniqueName: \"kubernetes.io/projected/5173799b-684f-4c79-8cff-9e490bf9e44d-kube-api-access-kqb5g\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.029536 4966 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.029548 4966 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5173799b-684f-4c79-8cff-9e490bf9e44d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.029568 4966 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.029580 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.046714 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.069292 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-config-data" (OuterVolumeSpecName: "config-data") pod "5173799b-684f-4c79-8cff-9e490bf9e44d" (UID: "5173799b-684f-4c79-8cff-9e490bf9e44d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.132091 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5173799b-684f-4c79-8cff-9e490bf9e44d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.137043 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.225813 4966 generic.go:334] "Generic (PLEG): container finished" podID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerID="7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554" exitCode=0 Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.226036 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.226034 4966 generic.go:334] "Generic (PLEG): container finished" podID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerID="e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc" exitCode=0 Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.226567 4966 generic.go:334] "Generic (PLEG): container finished" podID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerID="0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a" exitCode=0 Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.225940 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerDied","Data":"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554"} Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.226659 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerDied","Data":"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc"} Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.226674 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerDied","Data":"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a"} Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.226686 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5173799b-684f-4c79-8cff-9e490bf9e44d","Type":"ContainerDied","Data":"6725fce46c8cc25e2a3a4c3bef728bc631ed43b80936cd25c9e6e80e981f5b5f"} Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.226704 4966 scope.go:117] "RemoveContainer" containerID="7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.230779 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f2adce5-58ba-4306-aad8-cdce724c23d1","Type":"ContainerStarted","Data":"18a2fc50f03a1091eb2816406e35dbc4d1f2572859d055f7a8670d08b0e67e63"} Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.257346 4966 generic.go:334] "Generic (PLEG): container finished" podID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerID="fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6" exitCode=143 Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.259228 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dbe16c5-6e68-4e28-9c01-e9558046f377","Type":"ContainerDied","Data":"fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6"} Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.406064 4966 scope.go:117] "RemoveContainer" containerID="f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.422062 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.445985 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.472270 4966 scope.go:117] "RemoveContainer" containerID="e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.480138 4966 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 27 16:08:07 crc kubenswrapper[4966]: E0127 16:08:07.480696 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="proxy-httpd" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.480712 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="proxy-httpd" Jan 27 16:08:07 crc kubenswrapper[4966]: E0127 16:08:07.480734 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="sg-core" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.480742 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="sg-core" Jan 27 16:08:07 crc kubenswrapper[4966]: E0127 16:08:07.480763 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="ceilometer-notification-agent" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.480770 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="ceilometer-notification-agent" Jan 27 16:08:07 crc kubenswrapper[4966]: E0127 16:08:07.480797 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="ceilometer-central-agent" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.480805 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="ceilometer-central-agent" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.481025 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="ceilometer-central-agent" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.481042 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="ceilometer-notification-agent" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.481058 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="sg-core" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.481066 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" containerName="proxy-httpd" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.482964 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.485950 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.493926 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.503544 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.561082 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.561178 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-log-httpd\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.561212 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdzfg\" (UniqueName: \"kubernetes.io/projected/acb5055b-edfd-4093-9b19-6887f5d11239-kube-api-access-mdzfg\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.561617 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-scripts\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.561659 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-run-httpd\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.561723 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.562045 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-config-data\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.577754 4966 scope.go:117] "RemoveContainer" containerID="0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.625942 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.630821 4966 
scope.go:117] "RemoveContainer" containerID="7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554" Jan 27 16:08:07 crc kubenswrapper[4966]: E0127 16:08:07.631547 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554\": container with ID starting with 7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554 not found: ID does not exist" containerID="7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.631572 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554"} err="failed to get container status \"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554\": rpc error: code = NotFound desc = could not find container \"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554\": container with ID starting with 7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554 not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.631593 4966 scope.go:117] "RemoveContainer" containerID="f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00" Jan 27 16:08:07 crc kubenswrapper[4966]: E0127 16:08:07.636820 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00\": container with ID starting with f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00 not found: ID does not exist" containerID="f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.636868 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00"} err="failed to get container status \"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00\": rpc error: code = NotFound desc = could not find container \"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00\": container with ID starting with f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00 not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.636978 4966 scope.go:117] "RemoveContainer" containerID="e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc" Jan 27 16:08:07 crc kubenswrapper[4966]: E0127 16:08:07.637346 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc\": container with ID starting with e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc not found: ID does not exist" containerID="e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.637398 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc"} err="failed to get container status \"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc\": rpc error: code = NotFound desc = could not find container \"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc\": container with ID starting with 
e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.637417 4966 scope.go:117] "RemoveContainer" containerID="0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a" Jan 27 16:08:07 crc kubenswrapper[4966]: E0127 16:08:07.637812 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a\": container with ID starting with 0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a not found: ID does not exist" containerID="0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.637830 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a"} err="failed to get container status \"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a\": rpc error: code = NotFound desc = could not find container \"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a\": container with ID starting with 0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.637843 4966 scope.go:117] "RemoveContainer" containerID="7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.638155 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554"} err="failed to get container status \"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554\": rpc error: code = NotFound desc = could not find container \"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554\": container with ID starting with 7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554 not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.638196 4966 scope.go:117] "RemoveContainer" containerID="f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.638424 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00"} err="failed to get container status \"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00\": rpc error: code = NotFound desc = could not find container \"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00\": container with ID starting with f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00 not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.638438 4966 scope.go:117] "RemoveContainer" containerID="e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc" Jan 27 16:08:07 crc kubenswrapper[4966]: W0127 16:08:07.641052 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd83cc2a1_a7e1_4a08_be19_acfcabb8bafa.slice/crio-b065c58bba62f08bfac04b216fcce6398580e315b40d9ca4834f90f3fdb76149 WatchSource:0}: Error finding container b065c58bba62f08bfac04b216fcce6398580e315b40d9ca4834f90f3fdb76149: Status 404 returned error can't find the container with id 
b065c58bba62f08bfac04b216fcce6398580e315b40d9ca4834f90f3fdb76149 Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.641125 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc"} err="failed to get container status \"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc\": rpc error: code = NotFound desc = could not find container \"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc\": container with ID starting with e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.641166 4966 scope.go:117] "RemoveContainer" containerID="0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.641716 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a"} err="failed to get container status \"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a\": rpc error: code = NotFound desc = could not find container \"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a\": container with ID starting with 0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.641745 4966 scope.go:117] "RemoveContainer" containerID="7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.643180 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554"} err="failed to get container status \"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554\": rpc error: code = NotFound desc = could not find container \"7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554\": container with ID starting with 7cc0aad0fb206dc39e463acf28e0f052bc9bf95a1655f0d077255cc740884554 not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.643201 4966 scope.go:117] "RemoveContainer" containerID="f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.644150 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00"} err="failed to get container status \"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00\": rpc error: code = NotFound desc = could not find container \"f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00\": container with ID starting with f05242c6f46226350edaa6ef4bdd4af10a1890b545c08d1e0c184ab495b34e00 not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.644200 4966 scope.go:117] "RemoveContainer" containerID="e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.644571 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc"} err="failed to get container status \"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc\": rpc error: code = NotFound desc = could not find container 
\"e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc\": container with ID starting with e357a7996fa1a886e260f1267664c40645beb76a9a9408b98a002a616cd417fc not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.644595 4966 scope.go:117] "RemoveContainer" containerID="0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.648760 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a"} err="failed to get container status \"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a\": rpc error: code = NotFound desc = could not find container \"0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a\": container with ID starting with 0a5d533345916473a4ae56186ebaa32cd7d95b8ef976272af4597e18bfe26c8a not found: ID does not exist" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.664526 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.664910 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-config-data\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.665026 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.665095 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-log-httpd\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.665122 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdzfg\" (UniqueName: \"kubernetes.io/projected/acb5055b-edfd-4093-9b19-6887f5d11239-kube-api-access-mdzfg\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.665227 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-scripts\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.665268 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-run-httpd\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.667475 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-run-httpd\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.667932 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-log-httpd\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.670568 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.670692 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-scripts\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.674017 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.679490 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-config-data\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.682529 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdzfg\" (UniqueName: \"kubernetes.io/projected/acb5055b-edfd-4093-9b19-6887f5d11239-kube-api-access-mdzfg\") pod \"ceilometer-0\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " pod="openstack/ceilometer-0" Jan 27 16:08:07 crc kubenswrapper[4966]: I0127 16:08:07.864631 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.273633 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa","Type":"ContainerStarted","Data":"e0016da516328b8611a9ebacbacf09c1184baf2586258ef558fd61c2002e68da"} Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.273676 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d83cc2a1-a7e1-4a08-be19-acfcabb8bafa","Type":"ContainerStarted","Data":"b065c58bba62f08bfac04b216fcce6398580e315b40d9ca4834f90f3fdb76149"} Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.291386 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f2adce5-58ba-4306-aad8-cdce724c23d1","Type":"ContainerStarted","Data":"21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0"} Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.291444 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f2adce5-58ba-4306-aad8-cdce724c23d1","Type":"ContainerStarted","Data":"8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1"} Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.297639 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.297625617 podStartE2EDuration="2.297625617s" podCreationTimestamp="2026-01-27 16:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:08:08.293205019 +0000 UTC m=+1554.595998517" watchObservedRunningTime="2026-01-27 16:08:08.297625617 +0000 UTC m=+1554.600419106" Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.318821 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.318801133 podStartE2EDuration="2.318801133s" podCreationTimestamp="2026-01-27 16:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:08:08.311432131 +0000 UTC m=+1554.614225639" watchObservedRunningTime="2026-01-27 16:08:08.318801133 +0000 UTC m=+1554.621594621" Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.370170 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.547578 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5173799b-684f-4c79-8cff-9e490bf9e44d" path="/var/lib/kubelet/pods/5173799b-684f-4c79-8cff-9e490bf9e44d/volumes" Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.549431 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d2d694-e151-492e-9cb4-8dc614ea1b44" path="/var/lib/kubelet/pods/69d2d694-e151-492e-9cb4-8dc614ea1b44/volumes" Jan 27 16:08:08 crc kubenswrapper[4966]: I0127 16:08:08.550563 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:09 crc kubenswrapper[4966]: I0127 16:08:09.332713 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerStarted","Data":"5ada9a99ae966823d951b07f1bd9624a365b4a8089f96fd6e36e2d907516982d"} Jan 27 16:08:09 crc kubenswrapper[4966]: I0127 
16:08:09.332767 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerStarted","Data":"889eb9cd576873c7ffa127ae781c088d11b1a92b39bbc6c24b824bc5d9d53cfd"} Jan 27 16:08:09 crc kubenswrapper[4966]: I0127 16:08:09.970159 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.022252 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-config-data\") pod \"4dbe16c5-6e68-4e28-9c01-e9558046f377\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.022295 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-combined-ca-bundle\") pod \"4dbe16c5-6e68-4e28-9c01-e9558046f377\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.022407 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2slcf\" (UniqueName: \"kubernetes.io/projected/4dbe16c5-6e68-4e28-9c01-e9558046f377-kube-api-access-2slcf\") pod \"4dbe16c5-6e68-4e28-9c01-e9558046f377\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.022533 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dbe16c5-6e68-4e28-9c01-e9558046f377-logs\") pod \"4dbe16c5-6e68-4e28-9c01-e9558046f377\" (UID: \"4dbe16c5-6e68-4e28-9c01-e9558046f377\") " Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.023704 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dbe16c5-6e68-4e28-9c01-e9558046f377-logs" (OuterVolumeSpecName: "logs") pod "4dbe16c5-6e68-4e28-9c01-e9558046f377" (UID: "4dbe16c5-6e68-4e28-9c01-e9558046f377"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.029969 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbe16c5-6e68-4e28-9c01-e9558046f377-kube-api-access-2slcf" (OuterVolumeSpecName: "kube-api-access-2slcf") pod "4dbe16c5-6e68-4e28-9c01-e9558046f377" (UID: "4dbe16c5-6e68-4e28-9c01-e9558046f377"). InnerVolumeSpecName "kube-api-access-2slcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.056328 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-config-data" (OuterVolumeSpecName: "config-data") pod "4dbe16c5-6e68-4e28-9c01-e9558046f377" (UID: "4dbe16c5-6e68-4e28-9c01-e9558046f377"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.062947 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dbe16c5-6e68-4e28-9c01-e9558046f377" (UID: "4dbe16c5-6e68-4e28-9c01-e9558046f377"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.125157 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.125190 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbe16c5-6e68-4e28-9c01-e9558046f377-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.125202 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2slcf\" (UniqueName: \"kubernetes.io/projected/4dbe16c5-6e68-4e28-9c01-e9558046f377-kube-api-access-2slcf\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.125211 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dbe16c5-6e68-4e28-9c01-e9558046f377-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.344428 4966 generic.go:334] "Generic (PLEG): container finished" podID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerID="8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1" exitCode=0 Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.344520 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.344520 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dbe16c5-6e68-4e28-9c01-e9558046f377","Type":"ContainerDied","Data":"8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1"} Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.344574 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dbe16c5-6e68-4e28-9c01-e9558046f377","Type":"ContainerDied","Data":"54afa192f3751ae69722bcf6894374066dd16800d44a18c983f45aeb2e663246"} Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.344594 4966 scope.go:117] "RemoveContainer" containerID="8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.346751 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerStarted","Data":"9b20884395ed877a022c4a17a6ac6966f3b0ce9910c24033cc3a0d17192aaf14"} Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.384951 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.388988 4966 scope.go:117] "RemoveContainer" containerID="fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.403995 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.423267 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 16:08:10 crc kubenswrapper[4966]: E0127 16:08:10.424035 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-log" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.424102 4966 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-log" Jan 27 16:08:10 crc kubenswrapper[4966]: E0127 16:08:10.424175 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-api" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.424224 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-api" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.424514 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-log" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.424592 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" containerName="nova-api-api" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.425819 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.431378 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.431553 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.431597 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.443566 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.535064 4966 scope.go:117] "RemoveContainer" containerID="8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1" Jan 27 16:08:10 crc kubenswrapper[4966]: E0127 16:08:10.539064 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1\": container with ID starting with 8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1 not found: ID does not exist" containerID="8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.539110 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1"} err="failed to get container status \"8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1\": rpc error: code = NotFound desc = could not find container \"8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1\": container with ID starting with 8613cdb680c921bc7b5bfc3052cd2909b41e99b146f4d245264471b2685f01d1 not found: ID does not exist" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.539138 4966 scope.go:117] "RemoveContainer" containerID="fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.539770 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.539929 4966 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.540486 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.540572 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9dv2\" (UniqueName: \"kubernetes.io/projected/3d4768cc-1cd6-460b-a65a-a82cd0154317-kube-api-access-g9dv2\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.540604 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4768cc-1cd6-460b-a65a-a82cd0154317-logs\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.540690 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-config-data\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: E0127 16:08:10.543179 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6\": container with ID starting with fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6 not found: ID does not exist" containerID="fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.543232 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6"} err="failed to get container status \"fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6\": rpc error: code = NotFound desc = could not find container \"fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6\": container with ID starting with fdf71b6f010fcf21783973b32ba42b833c5fbfcf587be85485655c351d0516a6 not found: ID does not exist" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.552048 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dbe16c5-6e68-4e28-9c01-e9558046f377" path="/var/lib/kubelet/pods/4dbe16c5-6e68-4e28-9c01-e9558046f377/volumes" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.642632 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4768cc-1cd6-460b-a65a-a82cd0154317-logs\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.642733 4966 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-config-data\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.642826 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.642918 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.642978 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.643031 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9dv2\" (UniqueName: \"kubernetes.io/projected/3d4768cc-1cd6-460b-a65a-a82cd0154317-kube-api-access-g9dv2\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.643047 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4768cc-1cd6-460b-a65a-a82cd0154317-logs\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.657806 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.657887 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.658171 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-config-data\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.658753 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.666807 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9dv2\" (UniqueName: 
\"kubernetes.io/projected/3d4768cc-1cd6-460b-a65a-a82cd0154317-kube-api-access-g9dv2\") pod \"nova-api-0\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " pod="openstack/nova-api-0" Jan 27 16:08:10 crc kubenswrapper[4966]: I0127 16:08:10.891653 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:08:11 crc kubenswrapper[4966]: I0127 16:08:11.365180 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerStarted","Data":"2b19677862df8429a9e8d4ce86cbd99c16676c133dbcb2e63799e4fd534a064b"} Jan 27 16:08:11 crc kubenswrapper[4966]: I0127 16:08:11.365448 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:08:11 crc kubenswrapper[4966]: W0127 16:08:11.374422 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d4768cc_1cd6_460b_a65a_a82cd0154317.slice/crio-d00dd0ef50a5e5bbd7c7234f504227e5c77b07178964991f41a81a9b8db95a01 WatchSource:0}: Error finding container d00dd0ef50a5e5bbd7c7234f504227e5c77b07178964991f41a81a9b8db95a01: Status 404 returned error can't find the container with id d00dd0ef50a5e5bbd7c7234f504227e5c77b07178964991f41a81a9b8db95a01 Jan 27 16:08:11 crc kubenswrapper[4966]: I0127 16:08:11.613196 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 16:08:11 crc kubenswrapper[4966]: I0127 16:08:11.613533 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 16:08:12 crc kubenswrapper[4966]: I0127 16:08:12.048005 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:12 crc kubenswrapper[4966]: I0127 16:08:12.379806 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4768cc-1cd6-460b-a65a-a82cd0154317","Type":"ContainerStarted","Data":"91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510"} Jan 27 16:08:12 crc kubenswrapper[4966]: I0127 16:08:12.379850 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4768cc-1cd6-460b-a65a-a82cd0154317","Type":"ContainerStarted","Data":"c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963"} Jan 27 16:08:12 crc kubenswrapper[4966]: I0127 16:08:12.379863 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4768cc-1cd6-460b-a65a-a82cd0154317","Type":"ContainerStarted","Data":"d00dd0ef50a5e5bbd7c7234f504227e5c77b07178964991f41a81a9b8db95a01"} Jan 27 16:08:12 crc kubenswrapper[4966]: I0127 16:08:12.420709 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.420691545 podStartE2EDuration="2.420691545s" podCreationTimestamp="2026-01-27 16:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:08:12.41099136 +0000 UTC m=+1558.713784848" watchObservedRunningTime="2026-01-27 16:08:12.420691545 +0000 UTC m=+1558.723485033" Jan 27 16:08:13 crc kubenswrapper[4966]: I0127 16:08:13.728740 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" Jan 27 16:08:13 crc kubenswrapper[4966]: I0127 16:08:13.883814 4966 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-rncxr"] Jan 27 16:08:13 crc kubenswrapper[4966]: I0127 16:08:13.884787 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" podUID="44197e72-4dfa-407d-83f4-99884b556b0d" containerName="dnsmasq-dns" containerID="cri-o://7f4e29a18867cb7b1e2eb676620043bf22c377197ca7d06bdc704a49423fee69" gracePeriod=10 Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.409560 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerStarted","Data":"5178fde2c812a7eb973b7de817623cd8bc1120d56ad0e5c8086651dce4fd1ce5"} Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.409679 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="ceilometer-central-agent" containerID="cri-o://5ada9a99ae966823d951b07f1bd9624a365b4a8089f96fd6e36e2d907516982d" gracePeriod=30 Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.409971 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="proxy-httpd" containerID="cri-o://5178fde2c812a7eb973b7de817623cd8bc1120d56ad0e5c8086651dce4fd1ce5" gracePeriod=30 Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.410037 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="sg-core" containerID="cri-o://2b19677862df8429a9e8d4ce86cbd99c16676c133dbcb2e63799e4fd534a064b" gracePeriod=30 Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.410072 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.410097 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="ceilometer-notification-agent" containerID="cri-o://9b20884395ed877a022c4a17a6ac6966f3b0ce9910c24033cc3a0d17192aaf14" gracePeriod=30 Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.422526 4966 generic.go:334] "Generic (PLEG): container finished" podID="44197e72-4dfa-407d-83f4-99884b556b0d" containerID="7f4e29a18867cb7b1e2eb676620043bf22c377197ca7d06bdc704a49423fee69" exitCode=0 Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.422578 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" event={"ID":"44197e72-4dfa-407d-83f4-99884b556b0d","Type":"ContainerDied","Data":"7f4e29a18867cb7b1e2eb676620043bf22c377197ca7d06bdc704a49423fee69"} Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.438034 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3017152149999998 podStartE2EDuration="7.438020196s" podCreationTimestamp="2026-01-27 16:08:07 +0000 UTC" firstStartedPulling="2026-01-27 16:08:08.369787633 +0000 UTC m=+1554.672581141" lastFinishedPulling="2026-01-27 16:08:13.506092624 +0000 UTC m=+1559.808886122" observedRunningTime="2026-01-27 16:08:14.434711842 +0000 UTC m=+1560.737505340" watchObservedRunningTime="2026-01-27 16:08:14.438020196 +0000 UTC m=+1560.740813684" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.487690 4966 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.589957 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-svc\") pod \"44197e72-4dfa-407d-83f4-99884b556b0d\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.589998 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-config\") pod \"44197e72-4dfa-407d-83f4-99884b556b0d\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.590110 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-swift-storage-0\") pod \"44197e72-4dfa-407d-83f4-99884b556b0d\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.590148 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-sb\") pod \"44197e72-4dfa-407d-83f4-99884b556b0d\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.590214 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29g2q\" (UniqueName: \"kubernetes.io/projected/44197e72-4dfa-407d-83f4-99884b556b0d-kube-api-access-29g2q\") pod \"44197e72-4dfa-407d-83f4-99884b556b0d\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.590231 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-nb\") pod \"44197e72-4dfa-407d-83f4-99884b556b0d\" (UID: \"44197e72-4dfa-407d-83f4-99884b556b0d\") " Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.596112 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44197e72-4dfa-407d-83f4-99884b556b0d-kube-api-access-29g2q" (OuterVolumeSpecName: "kube-api-access-29g2q") pod "44197e72-4dfa-407d-83f4-99884b556b0d" (UID: "44197e72-4dfa-407d-83f4-99884b556b0d"). InnerVolumeSpecName "kube-api-access-29g2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.654746 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-config" (OuterVolumeSpecName: "config") pod "44197e72-4dfa-407d-83f4-99884b556b0d" (UID: "44197e72-4dfa-407d-83f4-99884b556b0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.655777 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44197e72-4dfa-407d-83f4-99884b556b0d" (UID: "44197e72-4dfa-407d-83f4-99884b556b0d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.660544 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44197e72-4dfa-407d-83f4-99884b556b0d" (UID: "44197e72-4dfa-407d-83f4-99884b556b0d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.662342 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "44197e72-4dfa-407d-83f4-99884b556b0d" (UID: "44197e72-4dfa-407d-83f4-99884b556b0d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.666543 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44197e72-4dfa-407d-83f4-99884b556b0d" (UID: "44197e72-4dfa-407d-83f4-99884b556b0d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.693292 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29g2q\" (UniqueName: \"kubernetes.io/projected/44197e72-4dfa-407d-83f4-99884b556b0d-kube-api-access-29g2q\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.693319 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.693330 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.693339 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.693476 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:14 crc kubenswrapper[4966]: I0127 16:08:14.695395 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44197e72-4dfa-407d-83f4-99884b556b0d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.436063 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" event={"ID":"44197e72-4dfa-407d-83f4-99884b556b0d","Type":"ContainerDied","Data":"92e82760c702a6be2c598c382cdecee1226ea1254432ec29a519df3d30e0818c"} Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.436124 4966 scope.go:117] "RemoveContainer" containerID="7f4e29a18867cb7b1e2eb676620043bf22c377197ca7d06bdc704a49423fee69" Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.436143 4966 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-rncxr" Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.445254 4966 generic.go:334] "Generic (PLEG): container finished" podID="acb5055b-edfd-4093-9b19-6887f5d11239" containerID="5178fde2c812a7eb973b7de817623cd8bc1120d56ad0e5c8086651dce4fd1ce5" exitCode=0 Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.445296 4966 generic.go:334] "Generic (PLEG): container finished" podID="acb5055b-edfd-4093-9b19-6887f5d11239" containerID="2b19677862df8429a9e8d4ce86cbd99c16676c133dbcb2e63799e4fd534a064b" exitCode=2 Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.445308 4966 generic.go:334] "Generic (PLEG): container finished" podID="acb5055b-edfd-4093-9b19-6887f5d11239" containerID="9b20884395ed877a022c4a17a6ac6966f3b0ce9910c24033cc3a0d17192aaf14" exitCode=0 Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.445321 4966 generic.go:334] "Generic (PLEG): container finished" podID="acb5055b-edfd-4093-9b19-6887f5d11239" containerID="5ada9a99ae966823d951b07f1bd9624a365b4a8089f96fd6e36e2d907516982d" exitCode=0 Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.445346 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerDied","Data":"5178fde2c812a7eb973b7de817623cd8bc1120d56ad0e5c8086651dce4fd1ce5"} Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.445376 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerDied","Data":"2b19677862df8429a9e8d4ce86cbd99c16676c133dbcb2e63799e4fd534a064b"} Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.445389 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerDied","Data":"9b20884395ed877a022c4a17a6ac6966f3b0ce9910c24033cc3a0d17192aaf14"} Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.445401 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerDied","Data":"5ada9a99ae966823d951b07f1bd9624a365b4a8089f96fd6e36e2d907516982d"} Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.475367 4966 scope.go:117] "RemoveContainer" containerID="b28ad71f7e61400c157f093c0a0160e729199ab71f7d4ffec19a3a6f8edbb7e1" Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.500359 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-rncxr"] Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.548508 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-rncxr"] Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.779316 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.951354 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-scripts\") pod \"acb5055b-edfd-4093-9b19-6887f5d11239\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.951529 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-config-data\") pod \"acb5055b-edfd-4093-9b19-6887f5d11239\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.951670 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-log-httpd\") pod \"acb5055b-edfd-4093-9b19-6887f5d11239\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.951718 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdzfg\" (UniqueName: \"kubernetes.io/projected/acb5055b-edfd-4093-9b19-6887f5d11239-kube-api-access-mdzfg\") pod \"acb5055b-edfd-4093-9b19-6887f5d11239\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.951841 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-combined-ca-bundle\") pod \"acb5055b-edfd-4093-9b19-6887f5d11239\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.951952 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-sg-core-conf-yaml\") pod \"acb5055b-edfd-4093-9b19-6887f5d11239\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.952020 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-run-httpd\") pod \"acb5055b-edfd-4093-9b19-6887f5d11239\" (UID: \"acb5055b-edfd-4093-9b19-6887f5d11239\") " Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.953079 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "acb5055b-edfd-4093-9b19-6887f5d11239" (UID: "acb5055b-edfd-4093-9b19-6887f5d11239"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.953423 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "acb5055b-edfd-4093-9b19-6887f5d11239" (UID: "acb5055b-edfd-4093-9b19-6887f5d11239"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.957151 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-scripts" (OuterVolumeSpecName: "scripts") pod "acb5055b-edfd-4093-9b19-6887f5d11239" (UID: "acb5055b-edfd-4093-9b19-6887f5d11239"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.961720 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb5055b-edfd-4093-9b19-6887f5d11239-kube-api-access-mdzfg" (OuterVolumeSpecName: "kube-api-access-mdzfg") pod "acb5055b-edfd-4093-9b19-6887f5d11239" (UID: "acb5055b-edfd-4093-9b19-6887f5d11239"). InnerVolumeSpecName "kube-api-access-mdzfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:15 crc kubenswrapper[4966]: I0127 16:08:15.996206 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "acb5055b-edfd-4093-9b19-6887f5d11239" (UID: "acb5055b-edfd-4093-9b19-6887f5d11239"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.055661 4966 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.055696 4966 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.055709 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.055722 4966 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acb5055b-edfd-4093-9b19-6887f5d11239-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.055736 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdzfg\" (UniqueName: \"kubernetes.io/projected/acb5055b-edfd-4093-9b19-6887f5d11239-kube-api-access-mdzfg\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.083730 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acb5055b-edfd-4093-9b19-6887f5d11239" (UID: "acb5055b-edfd-4093-9b19-6887f5d11239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.104183 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-config-data" (OuterVolumeSpecName: "config-data") pod "acb5055b-edfd-4093-9b19-6887f5d11239" (UID: "acb5055b-edfd-4093-9b19-6887f5d11239"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.157715 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.157752 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb5055b-edfd-4093-9b19-6887f5d11239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.460084 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acb5055b-edfd-4093-9b19-6887f5d11239","Type":"ContainerDied","Data":"889eb9cd576873c7ffa127ae781c088d11b1a92b39bbc6c24b824bc5d9d53cfd"} Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.460153 4966 scope.go:117] "RemoveContainer" containerID="5178fde2c812a7eb973b7de817623cd8bc1120d56ad0e5c8086651dce4fd1ce5" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.460347 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.499776 4966 scope.go:117] "RemoveContainer" containerID="2b19677862df8429a9e8d4ce86cbd99c16676c133dbcb2e63799e4fd534a064b" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.531604 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:08:16 crc kubenswrapper[4966]: E0127 16:08:16.533769 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.560127 4966 scope.go:117] "RemoveContainer" containerID="9b20884395ed877a022c4a17a6ac6966f3b0ce9910c24033cc3a0d17192aaf14" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.580778 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44197e72-4dfa-407d-83f4-99884b556b0d" path="/var/lib/kubelet/pods/44197e72-4dfa-407d-83f4-99884b556b0d/volumes" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.581656 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.595972 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.613657 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.614687 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.625198 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:16 crc kubenswrapper[4966]: E0127 16:08:16.626106 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44197e72-4dfa-407d-83f4-99884b556b0d" containerName="dnsmasq-dns" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 
16:08:16.626134 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="44197e72-4dfa-407d-83f4-99884b556b0d" containerName="dnsmasq-dns" Jan 27 16:08:16 crc kubenswrapper[4966]: E0127 16:08:16.626161 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="ceilometer-central-agent" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626171 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="ceilometer-central-agent" Jan 27 16:08:16 crc kubenswrapper[4966]: E0127 16:08:16.626183 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44197e72-4dfa-407d-83f4-99884b556b0d" containerName="init" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626191 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="44197e72-4dfa-407d-83f4-99884b556b0d" containerName="init" Jan 27 16:08:16 crc kubenswrapper[4966]: E0127 16:08:16.626212 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="ceilometer-notification-agent" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626221 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="ceilometer-notification-agent" Jan 27 16:08:16 crc kubenswrapper[4966]: E0127 16:08:16.626256 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="proxy-httpd" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626265 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="proxy-httpd" Jan 27 16:08:16 crc kubenswrapper[4966]: E0127 16:08:16.626279 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="sg-core" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626287 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="sg-core" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626585 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="ceilometer-central-agent" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626625 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="proxy-httpd" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626636 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="ceilometer-notification-agent" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626650 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" containerName="sg-core" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.626665 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="44197e72-4dfa-407d-83f4-99884b556b0d" containerName="dnsmasq-dns" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.631163 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.636314 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.636693 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.644074 4966 scope.go:117] "RemoveContainer" containerID="5ada9a99ae966823d951b07f1bd9624a365b4a8089f96fd6e36e2d907516982d" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.645597 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.780816 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-scripts\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.781024 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-kube-api-access-jsf75\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.781054 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.781378 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-run-httpd\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.781803 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-log-httpd\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.782064 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-config-data\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.782169 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.884282 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-run-httpd\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.884719 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-log-httpd\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.884751 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-run-httpd\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.884764 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-config-data\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.884839 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.884985 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-scripts\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.885097 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-kube-api-access-jsf75\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.885121 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.885309 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-log-httpd\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.889615 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.890149 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-scripts\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.893331 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.901486 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-config-data\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.902321 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-kube-api-access-jsf75\") pod \"ceilometer-0\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " pod="openstack/ceilometer-0" Jan 27 16:08:16 crc kubenswrapper[4966]: I0127 16:08:16.951749 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.053405 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.078386 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.422196 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:17 crc kubenswrapper[4966]: W0127 16:08:17.422658 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52643db3_c4a6_4c23_9d4c_d1d29a2983a5.slice/crio-c149d267f47258889bd1ed4b5c89a142f0a9a68cca9731b6989a47de7d422180 WatchSource:0}: Error finding container c149d267f47258889bd1ed4b5c89a142f0a9a68cca9731b6989a47de7d422180: Status 404 returned error can't find the container with id c149d267f47258889bd1ed4b5c89a142f0a9a68cca9731b6989a47de7d422180 Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.479496 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerStarted","Data":"c149d267f47258889bd1ed4b5c89a142f0a9a68cca9731b6989a47de7d422180"} Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.504736 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.629121 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.253:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.629146 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerName="nova-metadata-log" probeResult="failure" 
output="Get \"https://10.217.0.253:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.671385 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bt86v"] Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.673187 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.679633 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.679888 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.688300 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bt86v"] Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.811328 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx8tc\" (UniqueName: \"kubernetes.io/projected/32c99749-de0a-41b0-a8e6-8d4bc6ada807-kube-api-access-jx8tc\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.811684 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.811761 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-scripts\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.811943 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-config-data\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.914502 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx8tc\" (UniqueName: \"kubernetes.io/projected/32c99749-de0a-41b0-a8e6-8d4bc6ada807-kube-api-access-jx8tc\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.914682 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.914717 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-scripts\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.914775 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-config-data\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.921446 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-config-data\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.922138 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-scripts\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.929483 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:17 crc kubenswrapper[4966]: I0127 16:08:17.956664 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx8tc\" (UniqueName: \"kubernetes.io/projected/32c99749-de0a-41b0-a8e6-8d4bc6ada807-kube-api-access-jx8tc\") pod \"nova-cell1-cell-mapping-bt86v\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:18 crc kubenswrapper[4966]: I0127 16:08:18.012699 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:18 crc kubenswrapper[4966]: I0127 16:08:18.514758 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bt86v"] Jan 27 16:08:18 crc kubenswrapper[4966]: I0127 16:08:18.563874 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb5055b-edfd-4093-9b19-6887f5d11239" path="/var/lib/kubelet/pods/acb5055b-edfd-4093-9b19-6887f5d11239/volumes" Jan 27 16:08:19 crc kubenswrapper[4966]: I0127 16:08:19.506300 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerStarted","Data":"488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24"} Jan 27 16:08:19 crc kubenswrapper[4966]: I0127 16:08:19.506681 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerStarted","Data":"327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1"} Jan 27 16:08:19 crc kubenswrapper[4966]: I0127 16:08:19.508250 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bt86v" event={"ID":"32c99749-de0a-41b0-a8e6-8d4bc6ada807","Type":"ContainerStarted","Data":"62877e76b852eff89dfebb34eae1818ceec73fcc492d846838100c3684171451"} Jan 27 16:08:19 crc kubenswrapper[4966]: I0127 16:08:19.508314 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bt86v" event={"ID":"32c99749-de0a-41b0-a8e6-8d4bc6ada807","Type":"ContainerStarted","Data":"ab96237e947b7270b7bfa31ad04eaaca38f8dcad2e5feccd59bc4a486e7fe4df"} Jan 27 16:08:19 crc kubenswrapper[4966]: I0127 16:08:19.533025 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bt86v" podStartSLOduration=2.533005121 podStartE2EDuration="2.533005121s" podCreationTimestamp="2026-01-27 16:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:08:19.524110472 +0000 UTC m=+1565.826903970" watchObservedRunningTime="2026-01-27 16:08:19.533005121 +0000 UTC m=+1565.835798619" Jan 27 16:08:20 crc kubenswrapper[4966]: I0127 16:08:20.537396 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerStarted","Data":"257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18"} Jan 27 16:08:20 crc kubenswrapper[4966]: I0127 16:08:20.892797 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 16:08:20 crc kubenswrapper[4966]: I0127 16:08:20.894988 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 16:08:21 crc kubenswrapper[4966]: I0127 16:08:21.904523 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 16:08:21 crc kubenswrapper[4966]: I0127 16:08:21.905371 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 16:08:22 crc kubenswrapper[4966]: I0127 16:08:22.559449 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerStarted","Data":"4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1"} Jan 27 16:08:22 crc kubenswrapper[4966]: I0127 16:08:22.561456 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 16:08:22 crc kubenswrapper[4966]: I0127 16:08:22.588753 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.694741379 podStartE2EDuration="6.588728366s" podCreationTimestamp="2026-01-27 16:08:16 +0000 UTC" firstStartedPulling="2026-01-27 16:08:17.425271882 +0000 UTC m=+1563.728065400" lastFinishedPulling="2026-01-27 16:08:21.319258889 +0000 UTC m=+1567.622052387" observedRunningTime="2026-01-27 16:08:22.578177384 +0000 UTC m=+1568.880970912" watchObservedRunningTime="2026-01-27 16:08:22.588728366 +0000 UTC m=+1568.891521884" Jan 27 16:08:23 crc kubenswrapper[4966]: I0127 16:08:23.570857 4966 generic.go:334] "Generic (PLEG): container finished" podID="32c99749-de0a-41b0-a8e6-8d4bc6ada807" containerID="62877e76b852eff89dfebb34eae1818ceec73fcc492d846838100c3684171451" exitCode=0 Jan 27 16:08:23 crc kubenswrapper[4966]: I0127 16:08:23.570950 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bt86v" event={"ID":"32c99749-de0a-41b0-a8e6-8d4bc6ada807","Type":"ContainerDied","Data":"62877e76b852eff89dfebb34eae1818ceec73fcc492d846838100c3684171451"} Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.066606 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.218291 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-combined-ca-bundle\") pod \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.218407 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx8tc\" (UniqueName: \"kubernetes.io/projected/32c99749-de0a-41b0-a8e6-8d4bc6ada807-kube-api-access-jx8tc\") pod \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.218567 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-config-data\") pod \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.218666 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-scripts\") pod \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\" (UID: \"32c99749-de0a-41b0-a8e6-8d4bc6ada807\") " Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.224840 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-scripts" (OuterVolumeSpecName: "scripts") pod "32c99749-de0a-41b0-a8e6-8d4bc6ada807" (UID: "32c99749-de0a-41b0-a8e6-8d4bc6ada807"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.236093 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c99749-de0a-41b0-a8e6-8d4bc6ada807-kube-api-access-jx8tc" (OuterVolumeSpecName: "kube-api-access-jx8tc") pod "32c99749-de0a-41b0-a8e6-8d4bc6ada807" (UID: "32c99749-de0a-41b0-a8e6-8d4bc6ada807"). InnerVolumeSpecName "kube-api-access-jx8tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.256714 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-config-data" (OuterVolumeSpecName: "config-data") pod "32c99749-de0a-41b0-a8e6-8d4bc6ada807" (UID: "32c99749-de0a-41b0-a8e6-8d4bc6ada807"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.297463 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32c99749-de0a-41b0-a8e6-8d4bc6ada807" (UID: "32c99749-de0a-41b0-a8e6-8d4bc6ada807"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.321388 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.321417 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.321426 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c99749-de0a-41b0-a8e6-8d4bc6ada807-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.321438 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx8tc\" (UniqueName: \"kubernetes.io/projected/32c99749-de0a-41b0-a8e6-8d4bc6ada807-kube-api-access-jx8tc\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.606223 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bt86v" event={"ID":"32c99749-de0a-41b0-a8e6-8d4bc6ada807","Type":"ContainerDied","Data":"ab96237e947b7270b7bfa31ad04eaaca38f8dcad2e5feccd59bc4a486e7fe4df"} Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.606604 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab96237e947b7270b7bfa31ad04eaaca38f8dcad2e5feccd59bc4a486e7fe4df" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.606482 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bt86v" Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.785376 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.785706 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ec82113e-c41d-4ee5-906b-3aa78d343e46" containerName="nova-scheduler-scheduler" containerID="cri-o://ef049f30f83cb83855f56f0f8dc786de66113f501f66a8621e235a5bf0736608" gracePeriod=30 Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.810214 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.810535 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-log" containerID="cri-o://c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963" gracePeriod=30 Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.810593 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-api" containerID="cri-o://91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510" gracePeriod=30 Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.823232 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.823635 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" 
containerName="nova-metadata-log" containerID="cri-o://8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1" gracePeriod=30 Jan 27 16:08:25 crc kubenswrapper[4966]: I0127 16:08:25.823723 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerName="nova-metadata-metadata" containerID="cri-o://21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0" gracePeriod=30 Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.569873 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.618487 4966 generic.go:334] "Generic (PLEG): container finished" podID="a9768164-573c-4615-a9b9-4d71b0cea701" containerID="2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7" exitCode=137 Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.618556 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerDied","Data":"2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7"} Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.618590 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a9768164-573c-4615-a9b9-4d71b0cea701","Type":"ContainerDied","Data":"0460b2cf9db2555fc61e8ee1a12dee0ff54bfeaaa3ce773e1698ba42fb41961a"} Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.618610 4966 scope.go:117] "RemoveContainer" containerID="2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.618785 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.629393 4966 generic.go:334] "Generic (PLEG): container finished" podID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerID="8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1" exitCode=143 Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.629553 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f2adce5-58ba-4306-aad8-cdce724c23d1","Type":"ContainerDied","Data":"8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1"} Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.631909 4966 generic.go:334] "Generic (PLEG): container finished" podID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerID="c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963" exitCode=143 Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.631952 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4768cc-1cd6-460b-a65a-a82cd0154317","Type":"ContainerDied","Data":"c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963"} Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.650871 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-combined-ca-bundle\") pod \"a9768164-573c-4615-a9b9-4d71b0cea701\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.651157 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrk7w\" (UniqueName: \"kubernetes.io/projected/a9768164-573c-4615-a9b9-4d71b0cea701-kube-api-access-qrk7w\") pod \"a9768164-573c-4615-a9b9-4d71b0cea701\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.651295 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-config-data\") pod \"a9768164-573c-4615-a9b9-4d71b0cea701\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.651344 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-scripts\") pod \"a9768164-573c-4615-a9b9-4d71b0cea701\" (UID: \"a9768164-573c-4615-a9b9-4d71b0cea701\") " Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.653109 4966 scope.go:117] "RemoveContainer" containerID="b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.658215 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9768164-573c-4615-a9b9-4d71b0cea701-kube-api-access-qrk7w" (OuterVolumeSpecName: "kube-api-access-qrk7w") pod "a9768164-573c-4615-a9b9-4d71b0cea701" (UID: "a9768164-573c-4615-a9b9-4d71b0cea701"). InnerVolumeSpecName "kube-api-access-qrk7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.658577 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-scripts" (OuterVolumeSpecName: "scripts") pod "a9768164-573c-4615-a9b9-4d71b0cea701" (UID: "a9768164-573c-4615-a9b9-4d71b0cea701"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.735520 4966 scope.go:117] "RemoveContainer" containerID="43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.754997 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrk7w\" (UniqueName: \"kubernetes.io/projected/a9768164-573c-4615-a9b9-4d71b0cea701-kube-api-access-qrk7w\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.755034 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.758264 4966 scope.go:117] "RemoveContainer" containerID="88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.784631 4966 scope.go:117] "RemoveContainer" containerID="2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7" Jan 27 16:08:26 crc kubenswrapper[4966]: E0127 16:08:26.785123 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7\": container with ID starting with 2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7 not found: ID does not exist" containerID="2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.785155 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7"} err="failed to get container status \"2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7\": rpc error: code = NotFound desc = could not find container \"2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7\": container with ID starting with 2d33ca77dede2ca63795090b237040bf292a9cec30351c8a84375ea55cfa16f7 not found: ID does not exist" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.785178 4966 scope.go:117] "RemoveContainer" containerID="b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5" Jan 27 16:08:26 crc kubenswrapper[4966]: E0127 16:08:26.785436 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5\": container with ID starting with b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5 not found: ID does not exist" containerID="b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.785473 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5"} err="failed to get container status \"b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5\": rpc error: code = NotFound desc = could not find container \"b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5\": container with ID starting with b60bd98f21daeb40e755029ea9e53db91a807ae010dc0a5142ed635ede9729b5 not found: ID does not exist" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.785487 4966 scope.go:117] "RemoveContainer" 
containerID="43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245" Jan 27 16:08:26 crc kubenswrapper[4966]: E0127 16:08:26.785814 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245\": container with ID starting with 43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245 not found: ID does not exist" containerID="43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.785834 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245"} err="failed to get container status \"43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245\": rpc error: code = NotFound desc = could not find container \"43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245\": container with ID starting with 43787ccad6529b763426ed96bbc63a6149dd9352d960104293d83edf8e9b4245 not found: ID does not exist" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.785858 4966 scope.go:117] "RemoveContainer" containerID="88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820" Jan 27 16:08:26 crc kubenswrapper[4966]: E0127 16:08:26.786111 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820\": container with ID starting with 88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820 not found: ID does not exist" containerID="88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.786147 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820"} err="failed to get container status \"88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820\": rpc error: code = NotFound desc = could not find container \"88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820\": container with ID starting with 88cab4c7d14bf558976fd78ccd095f6a1720af26d9ddab3a632af696a77e4820 not found: ID does not exist" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.829420 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9768164-573c-4615-a9b9-4d71b0cea701" (UID: "a9768164-573c-4615-a9b9-4d71b0cea701"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.846402 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-config-data" (OuterVolumeSpecName: "config-data") pod "a9768164-573c-4615-a9b9-4d71b0cea701" (UID: "a9768164-573c-4615-a9b9-4d71b0cea701"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.857294 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.857337 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9768164-573c-4615-a9b9-4d71b0cea701-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.966333 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 16:08:26 crc kubenswrapper[4966]: I0127 16:08:26.980764 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.001965 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 16:08:27 crc kubenswrapper[4966]: E0127 16:08:27.002580 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-listener" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.002598 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-listener" Jan 27 16:08:27 crc kubenswrapper[4966]: E0127 16:08:27.002640 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c99749-de0a-41b0-a8e6-8d4bc6ada807" containerName="nova-manage" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.002650 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c99749-de0a-41b0-a8e6-8d4bc6ada807" containerName="nova-manage" Jan 27 16:08:27 crc kubenswrapper[4966]: E0127 16:08:27.002666 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-evaluator" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.002679 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-evaluator" Jan 27 16:08:27 crc kubenswrapper[4966]: E0127 16:08:27.002688 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-notifier" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.002695 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-notifier" Jan 27 16:08:27 crc kubenswrapper[4966]: E0127 16:08:27.002714 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-api" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.002719 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-api" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.002963 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c99749-de0a-41b0-a8e6-8d4bc6ada807" containerName="nova-manage" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.002992 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-evaluator" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.003010 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" 
containerName="aodh-listener" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.003022 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-notifier" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.003041 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" containerName="aodh-api" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.005692 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.024164 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.024683 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.025085 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.024354 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.025488 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jknwm" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.090966 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.178935 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-internal-tls-certs\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.179246 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n565l\" (UniqueName: \"kubernetes.io/projected/4c9455be-e18b-4b63-aa95-b564b865c894-kube-api-access-n565l\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.179318 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-scripts\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.179589 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-public-tls-certs\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.179761 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-config-data\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.180046 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.282205 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-scripts\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.282311 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-public-tls-certs\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.282366 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-config-data\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.282435 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.282462 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-internal-tls-certs\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.282516 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n565l\" (UniqueName: \"kubernetes.io/projected/4c9455be-e18b-4b63-aa95-b564b865c894-kube-api-access-n565l\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.287243 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-scripts\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.287329 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-internal-tls-certs\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.288455 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-public-tls-certs\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.289351 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.292087 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-config-data\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.300385 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n565l\" (UniqueName: \"kubernetes.io/projected/4c9455be-e18b-4b63-aa95-b564b865c894-kube-api-access-n565l\") pod \"aodh-0\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.417250 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.521923 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:08:27 crc kubenswrapper[4966]: E0127 16:08:27.522463 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:08:27 crc kubenswrapper[4966]: W0127 16:08:27.881422 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c9455be_e18b_4b63_aa95_b564b865c894.slice/crio-b9fe43d9f4a425f5e5126877ec92e0963be6eb95102ccdbaa4f1e78392517940 WatchSource:0}: Error finding container b9fe43d9f4a425f5e5126877ec92e0963be6eb95102ccdbaa4f1e78392517940: Status 404 returned error can't find the container with id b9fe43d9f4a425f5e5126877ec92e0963be6eb95102ccdbaa4f1e78392517940 Jan 27 16:08:27 crc kubenswrapper[4966]: I0127 16:08:27.883487 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 16:08:28 crc kubenswrapper[4966]: E0127 16:08:28.080582 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ef049f30f83cb83855f56f0f8dc786de66113f501f66a8621e235a5bf0736608" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 16:08:28 crc kubenswrapper[4966]: E0127 16:08:28.084426 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ef049f30f83cb83855f56f0f8dc786de66113f501f66a8621e235a5bf0736608" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 16:08:28 crc kubenswrapper[4966]: E0127 16:08:28.086029 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ef049f30f83cb83855f56f0f8dc786de66113f501f66a8621e235a5bf0736608" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 16:08:28 crc kubenswrapper[4966]: E0127 16:08:28.086137 4966 prober.go:104] "Probe errored" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ec82113e-c41d-4ee5-906b-3aa78d343e46" containerName="nova-scheduler-scheduler" Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.539501 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9768164-573c-4615-a9b9-4d71b0cea701" path="/var/lib/kubelet/pods/a9768164-573c-4615-a9b9-4d71b0cea701/volumes" Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.673857 4966 generic.go:334] "Generic (PLEG): container finished" podID="ec82113e-c41d-4ee5-906b-3aa78d343e46" containerID="ef049f30f83cb83855f56f0f8dc786de66113f501f66a8621e235a5bf0736608" exitCode=0 Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.673950 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ec82113e-c41d-4ee5-906b-3aa78d343e46","Type":"ContainerDied","Data":"ef049f30f83cb83855f56f0f8dc786de66113f501f66a8621e235a5bf0736608"} Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.675674 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerStarted","Data":"b9fe43d9f4a425f5e5126877ec92e0963be6eb95102ccdbaa4f1e78392517940"} Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.707332 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.837382 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-combined-ca-bundle\") pod \"ec82113e-c41d-4ee5-906b-3aa78d343e46\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.838078 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-config-data\") pod \"ec82113e-c41d-4ee5-906b-3aa78d343e46\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.838391 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-864qs\" (UniqueName: \"kubernetes.io/projected/ec82113e-c41d-4ee5-906b-3aa78d343e46-kube-api-access-864qs\") pod \"ec82113e-c41d-4ee5-906b-3aa78d343e46\" (UID: \"ec82113e-c41d-4ee5-906b-3aa78d343e46\") " Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.849148 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec82113e-c41d-4ee5-906b-3aa78d343e46-kube-api-access-864qs" (OuterVolumeSpecName: "kube-api-access-864qs") pod "ec82113e-c41d-4ee5-906b-3aa78d343e46" (UID: "ec82113e-c41d-4ee5-906b-3aa78d343e46"). InnerVolumeSpecName "kube-api-access-864qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.890173 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec82113e-c41d-4ee5-906b-3aa78d343e46" (UID: "ec82113e-c41d-4ee5-906b-3aa78d343e46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.891348 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-config-data" (OuterVolumeSpecName: "config-data") pod "ec82113e-c41d-4ee5-906b-3aa78d343e46" (UID: "ec82113e-c41d-4ee5-906b-3aa78d343e46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.941292 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.941347 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec82113e-c41d-4ee5-906b-3aa78d343e46-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:28 crc kubenswrapper[4966]: I0127 16:08:28.941360 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-864qs\" (UniqueName: \"kubernetes.io/projected/ec82113e-c41d-4ee5-906b-3aa78d343e46-kube-api-access-864qs\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.506767 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.563983 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.661425 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2adce5-58ba-4306-aad8-cdce724c23d1-logs\") pod \"3f2adce5-58ba-4306-aad8-cdce724c23d1\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.661794 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-public-tls-certs\") pod \"3d4768cc-1cd6-460b-a65a-a82cd0154317\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.661801 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2adce5-58ba-4306-aad8-cdce724c23d1-logs" (OuterVolumeSpecName: "logs") pod "3f2adce5-58ba-4306-aad8-cdce724c23d1" (UID: "3f2adce5-58ba-4306-aad8-cdce724c23d1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.661833 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-config-data\") pod \"3f2adce5-58ba-4306-aad8-cdce724c23d1\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.661954 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jrmt\" (UniqueName: \"kubernetes.io/projected/3f2adce5-58ba-4306-aad8-cdce724c23d1-kube-api-access-8jrmt\") pod \"3f2adce5-58ba-4306-aad8-cdce724c23d1\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.661998 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-combined-ca-bundle\") pod \"3f2adce5-58ba-4306-aad8-cdce724c23d1\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.662024 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4768cc-1cd6-460b-a65a-a82cd0154317-logs\") pod \"3d4768cc-1cd6-460b-a65a-a82cd0154317\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.662062 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-config-data\") pod \"3d4768cc-1cd6-460b-a65a-a82cd0154317\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.662089 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-nova-metadata-tls-certs\") pod \"3f2adce5-58ba-4306-aad8-cdce724c23d1\" (UID: \"3f2adce5-58ba-4306-aad8-cdce724c23d1\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.662115 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-combined-ca-bundle\") pod \"3d4768cc-1cd6-460b-a65a-a82cd0154317\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.662196 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-internal-tls-certs\") pod \"3d4768cc-1cd6-460b-a65a-a82cd0154317\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.662228 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9dv2\" (UniqueName: \"kubernetes.io/projected/3d4768cc-1cd6-460b-a65a-a82cd0154317-kube-api-access-g9dv2\") pod \"3d4768cc-1cd6-460b-a65a-a82cd0154317\" (UID: \"3d4768cc-1cd6-460b-a65a-a82cd0154317\") " Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.662618 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4768cc-1cd6-460b-a65a-a82cd0154317-logs" (OuterVolumeSpecName: "logs") pod "3d4768cc-1cd6-460b-a65a-a82cd0154317" (UID: 
"3d4768cc-1cd6-460b-a65a-a82cd0154317"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.663051 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4768cc-1cd6-460b-a65a-a82cd0154317-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.663072 4966 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2adce5-58ba-4306-aad8-cdce724c23d1-logs\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.666928 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2adce5-58ba-4306-aad8-cdce724c23d1-kube-api-access-8jrmt" (OuterVolumeSpecName: "kube-api-access-8jrmt") pod "3f2adce5-58ba-4306-aad8-cdce724c23d1" (UID: "3f2adce5-58ba-4306-aad8-cdce724c23d1"). InnerVolumeSpecName "kube-api-access-8jrmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.667273 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4768cc-1cd6-460b-a65a-a82cd0154317-kube-api-access-g9dv2" (OuterVolumeSpecName: "kube-api-access-g9dv2") pod "3d4768cc-1cd6-460b-a65a-a82cd0154317" (UID: "3d4768cc-1cd6-460b-a65a-a82cd0154317"). InnerVolumeSpecName "kube-api-access-g9dv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.688838 4966 generic.go:334] "Generic (PLEG): container finished" podID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerID="91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510" exitCode=0 Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.688883 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.688981 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4768cc-1cd6-460b-a65a-a82cd0154317","Type":"ContainerDied","Data":"91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510"} Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.689025 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4768cc-1cd6-460b-a65a-a82cd0154317","Type":"ContainerDied","Data":"d00dd0ef50a5e5bbd7c7234f504227e5c77b07178964991f41a81a9b8db95a01"} Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.689053 4966 scope.go:117] "RemoveContainer" containerID="91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.692670 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.692762 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ec82113e-c41d-4ee5-906b-3aa78d343e46","Type":"ContainerDied","Data":"74297b816a6639dce8e2dfb1b26d35da82c5f708da0ac33319cdc0fc4cf220c5"} Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.697221 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-config-data" (OuterVolumeSpecName: "config-data") pod "3d4768cc-1cd6-460b-a65a-a82cd0154317" (UID: "3d4768cc-1cd6-460b-a65a-a82cd0154317"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.699107 4966 generic.go:334] "Generic (PLEG): container finished" podID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerID="21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0" exitCode=0 Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.699159 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f2adce5-58ba-4306-aad8-cdce724c23d1","Type":"ContainerDied","Data":"21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0"} Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.699183 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f2adce5-58ba-4306-aad8-cdce724c23d1","Type":"ContainerDied","Data":"18a2fc50f03a1091eb2816406e35dbc4d1f2572859d055f7a8670d08b0e67e63"} Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.699228 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.704127 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerStarted","Data":"81a79cc083a72432c68279e877a519d0de42243d01dd05f0cd6873c816ed34b3"} Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.704162 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerStarted","Data":"93c5553a37fa0703ee26e0e9c080dd7ed8ebf27bdf0aa354dbead102f0837361"} Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.706043 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-config-data" (OuterVolumeSpecName: "config-data") pod "3f2adce5-58ba-4306-aad8-cdce724c23d1" (UID: "3f2adce5-58ba-4306-aad8-cdce724c23d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.712744 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d4768cc-1cd6-460b-a65a-a82cd0154317" (UID: "3d4768cc-1cd6-460b-a65a-a82cd0154317"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.715415 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f2adce5-58ba-4306-aad8-cdce724c23d1" (UID: "3f2adce5-58ba-4306-aad8-cdce724c23d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.717967 4966 scope.go:117] "RemoveContainer" containerID="c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.746736 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.752002 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3f2adce5-58ba-4306-aad8-cdce724c23d1" (UID: "3f2adce5-58ba-4306-aad8-cdce724c23d1"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.758134 4966 scope.go:117] "RemoveContainer" containerID="91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510" Jan 27 16:08:29 crc kubenswrapper[4966]: E0127 16:08:29.759428 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510\": container with ID starting with 91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510 not found: ID does not exist" containerID="91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.759456 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510"} err="failed to get container status \"91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510\": rpc error: code = NotFound desc = could not find container \"91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510\": container with ID starting with 91b520c529bcd20922167cf0a74cca23f86a85ea8ca85364c455570487cfe510 not found: ID does not exist" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.759479 4966 scope.go:117] "RemoveContainer" containerID="c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963" Jan 27 16:08:29 crc kubenswrapper[4966]: E0127 16:08:29.760379 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963\": container with ID starting with c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963 not found: ID does not exist" containerID="c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.760406 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963"} err="failed to get container status \"c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963\": rpc error: code = NotFound desc = could 
not find container \"c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963\": container with ID starting with c420cc1370e5cf9706b1371a0fd0285359c315ca4f0595aff96cc865fc03e963 not found: ID does not exist" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.760421 4966 scope.go:117] "RemoveContainer" containerID="ef049f30f83cb83855f56f0f8dc786de66113f501f66a8621e235a5bf0736608" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.762559 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3d4768cc-1cd6-460b-a65a-a82cd0154317" (UID: "3d4768cc-1cd6-460b-a65a-a82cd0154317"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.764998 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9dv2\" (UniqueName: \"kubernetes.io/projected/3d4768cc-1cd6-460b-a65a-a82cd0154317-kube-api-access-g9dv2\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.765016 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.765025 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jrmt\" (UniqueName: \"kubernetes.io/projected/3f2adce5-58ba-4306-aad8-cdce724c23d1-kube-api-access-8jrmt\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.765049 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.765059 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.765068 4966 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2adce5-58ba-4306-aad8-cdce724c23d1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.765077 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.765085 4966 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.767945 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.768346 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3d4768cc-1cd6-460b-a65a-a82cd0154317" (UID: "3d4768cc-1cd6-460b-a65a-a82cd0154317"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.779950 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:08:29 crc kubenswrapper[4966]: E0127 16:08:29.780397 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-api" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.780414 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-api" Jan 27 16:08:29 crc kubenswrapper[4966]: E0127 16:08:29.780435 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec82113e-c41d-4ee5-906b-3aa78d343e46" containerName="nova-scheduler-scheduler" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.780443 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec82113e-c41d-4ee5-906b-3aa78d343e46" containerName="nova-scheduler-scheduler" Jan 27 16:08:29 crc kubenswrapper[4966]: E0127 16:08:29.780498 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerName="nova-metadata-metadata" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.780505 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerName="nova-metadata-metadata" Jan 27 16:08:29 crc kubenswrapper[4966]: E0127 16:08:29.780522 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerName="nova-metadata-log" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.780528 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerName="nova-metadata-log" Jan 27 16:08:29 crc kubenswrapper[4966]: E0127 16:08:29.780540 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-log" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.780546 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-log" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.780754 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec82113e-c41d-4ee5-906b-3aa78d343e46" containerName="nova-scheduler-scheduler" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.780769 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerName="nova-metadata-metadata" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.780789 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" containerName="nova-metadata-log" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.780798 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-log" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.781517 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" containerName="nova-api-api" Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.782264 4966 util.go:30] "No sandbox for pod can be found. 
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.785787 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.788427 4966 scope.go:117] "RemoveContainer" containerID="21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0"
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.798467 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.816062 4966 scope.go:117] "RemoveContainer" containerID="8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1"
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.849679 4966 scope.go:117] "RemoveContainer" containerID="21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0"
Jan 27 16:08:29 crc kubenswrapper[4966]: E0127 16:08:29.850094 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0\": container with ID starting with 21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0 not found: ID does not exist" containerID="21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0"
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.850128 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0"} err="failed to get container status \"21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0\": rpc error: code = NotFound desc = could not find container \"21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0\": container with ID starting with 21a1f9ab6ee9283f1b792c200604275da81a9459d9934bde76ad0a69d5e4a1f0 not found: ID does not exist"
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.850151 4966 scope.go:117] "RemoveContainer" containerID="8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1"
Jan 27 16:08:29 crc kubenswrapper[4966]: E0127 16:08:29.850477 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1\": container with ID starting with 8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1 not found: ID does not exist" containerID="8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1"
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.850498 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1"} err="failed to get container status \"8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1\": rpc error: code = NotFound desc = could not find container \"8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1\": container with ID starting with 8740916bed18d8180fbae725988d7a57f976364db24bd05ea0b23e0f512257a1 not found: ID does not exist"
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.867162 4966 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4768cc-1cd6-460b-a65a-a82cd0154317-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.969192 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e083fa11-ceea-4516-8fb9-84b13faf4411-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e083fa11-ceea-4516-8fb9-84b13faf4411\") " pod="openstack/nova-scheduler-0"
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.969252 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e083fa11-ceea-4516-8fb9-84b13faf4411-config-data\") pod \"nova-scheduler-0\" (UID: \"e083fa11-ceea-4516-8fb9-84b13faf4411\") " pod="openstack/nova-scheduler-0"
Jan 27 16:08:29 crc kubenswrapper[4966]: I0127 16:08:29.969418 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k57gv\" (UniqueName: \"kubernetes.io/projected/e083fa11-ceea-4516-8fb9-84b13faf4411-kube-api-access-k57gv\") pod \"nova-scheduler-0\" (UID: \"e083fa11-ceea-4516-8fb9-84b13faf4411\") " pod="openstack/nova-scheduler-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.072018 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e083fa11-ceea-4516-8fb9-84b13faf4411-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e083fa11-ceea-4516-8fb9-84b13faf4411\") " pod="openstack/nova-scheduler-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.072089 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e083fa11-ceea-4516-8fb9-84b13faf4411-config-data\") pod \"nova-scheduler-0\" (UID: \"e083fa11-ceea-4516-8fb9-84b13faf4411\") " pod="openstack/nova-scheduler-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.072309 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k57gv\" (UniqueName: \"kubernetes.io/projected/e083fa11-ceea-4516-8fb9-84b13faf4411-kube-api-access-k57gv\") pod \"nova-scheduler-0\" (UID: \"e083fa11-ceea-4516-8fb9-84b13faf4411\") " pod="openstack/nova-scheduler-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.077592 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e083fa11-ceea-4516-8fb9-84b13faf4411-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e083fa11-ceea-4516-8fb9-84b13faf4411\") " pod="openstack/nova-scheduler-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.077923 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e083fa11-ceea-4516-8fb9-84b13faf4411-config-data\") pod \"nova-scheduler-0\" (UID: \"e083fa11-ceea-4516-8fb9-84b13faf4411\") " pod="openstack/nova-scheduler-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.092853 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k57gv\" (UniqueName: \"kubernetes.io/projected/e083fa11-ceea-4516-8fb9-84b13faf4411-kube-api-access-k57gv\") pod \"nova-scheduler-0\" (UID: \"e083fa11-ceea-4516-8fb9-84b13faf4411\") " pod="openstack/nova-scheduler-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.097021 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.354669 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.384480 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.398011 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.410987 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.423753 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.426143 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.428344 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.428952 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.429985 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.438854 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.447303 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.449469 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.449611 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.460053 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.483882 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509198 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509344 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffd2675-6c13-4c1e-82c5-d19c859db134-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509406 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffd2675-6c13-4c1e-82c5-d19c859db134-config-data\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509504 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dffd2675-6c13-4c1e-82c5-d19c859db134-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509550 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-config-data\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509635 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-public-tls-certs\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509689 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dffd2675-6c13-4c1e-82c5-d19c859db134-logs\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509727 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb5pw\" (UniqueName: \"kubernetes.io/projected/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-kube-api-access-qb5pw\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509776 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bdkv\" (UniqueName: \"kubernetes.io/projected/dffd2675-6c13-4c1e-82c5-d19c859db134-kube-api-access-4bdkv\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509958 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.509986 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-logs\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.541352 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4768cc-1cd6-460b-a65a-a82cd0154317" path="/var/lib/kubelet/pods/3d4768cc-1cd6-460b-a65a-a82cd0154317/volumes"
Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.542029 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2adce5-58ba-4306-aad8-cdce724c23d1" path="/var/lib/kubelet/pods/3f2adce5-58ba-4306-aad8-cdce724c23d1/volumes"
Jan 27 16:08:30 
crc kubenswrapper[4966]: I0127 16:08:30.542721 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec82113e-c41d-4ee5-906b-3aa78d343e46" path="/var/lib/kubelet/pods/ec82113e-c41d-4ee5-906b-3aa78d343e46/volumes" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612071 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dffd2675-6c13-4c1e-82c5-d19c859db134-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612114 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-config-data\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612155 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-public-tls-certs\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612182 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dffd2675-6c13-4c1e-82c5-d19c859db134-logs\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612198 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb5pw\" (UniqueName: \"kubernetes.io/projected/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-kube-api-access-qb5pw\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612226 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bdkv\" (UniqueName: \"kubernetes.io/projected/dffd2675-6c13-4c1e-82c5-d19c859db134-kube-api-access-4bdkv\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612301 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612316 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-logs\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612388 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612445 4966 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffd2675-6c13-4c1e-82c5-d19c859db134-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.612470 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffd2675-6c13-4c1e-82c5-d19c859db134-config-data\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.613784 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-logs\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.615717 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dffd2675-6c13-4c1e-82c5-d19c859db134-logs\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.618482 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dffd2675-6c13-4c1e-82c5-d19c859db134-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.618628 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.619016 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-config-data\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.619569 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.620204 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffd2675-6c13-4c1e-82c5-d19c859db134-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.620536 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-public-tls-certs\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.620861 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/dffd2675-6c13-4c1e-82c5-d19c859db134-config-data\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.627437 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bdkv\" (UniqueName: \"kubernetes.io/projected/dffd2675-6c13-4c1e-82c5-d19c859db134-kube-api-access-4bdkv\") pod \"nova-metadata-0\" (UID: \"dffd2675-6c13-4c1e-82c5-d19c859db134\") " pod="openstack/nova-metadata-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.629641 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb5pw\" (UniqueName: \"kubernetes.io/projected/3b8ddb20-6758-4eff-a0ea-0c0437f990e4-kube-api-access-qb5pw\") pod \"nova-api-0\" (UID: \"3b8ddb20-6758-4eff-a0ea-0c0437f990e4\") " pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: W0127 16:08:30.701474 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode083fa11_ceea_4516_8fb9_84b13faf4411.slice/crio-17cefdb4ef2f72f287f2f31e64975499ee24552d6ac805d9cb411c78750e11da WatchSource:0}: Error finding container 17cefdb4ef2f72f287f2f31e64975499ee24552d6ac805d9cb411c78750e11da: Status 404 returned error can't find the container with id 17cefdb4ef2f72f287f2f31e64975499ee24552d6ac805d9cb411c78750e11da Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.704091 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.723735 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerStarted","Data":"d82b63dabc7cb4a8248c8832270e6d4c8f3c260b49478782ed0a8a5d3d80e18a"} Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.725454 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e083fa11-ceea-4516-8fb9-84b13faf4411","Type":"ContainerStarted","Data":"17cefdb4ef2f72f287f2f31e64975499ee24552d6ac805d9cb411c78750e11da"} Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.769696 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 16:08:30 crc kubenswrapper[4966]: I0127 16:08:30.781692 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 16:08:31 crc kubenswrapper[4966]: W0127 16:08:31.301561 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b8ddb20_6758_4eff_a0ea_0c0437f990e4.slice/crio-2fd318577e1d3b5b4e132dc9f12b9bd2e727d369531d457bd7668d8e229373dc WatchSource:0}: Error finding container 2fd318577e1d3b5b4e132dc9f12b9bd2e727d369531d457bd7668d8e229373dc: Status 404 returned error can't find the container with id 2fd318577e1d3b5b4e132dc9f12b9bd2e727d369531d457bd7668d8e229373dc Jan 27 16:08:31 crc kubenswrapper[4966]: W0127 16:08:31.308599 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddffd2675_6c13_4c1e_82c5_d19c859db134.slice/crio-e57e59f405cfd583c1e46baa4165e90afcb42185968f4308fb62cf31e21ef0ca WatchSource:0}: Error finding container e57e59f405cfd583c1e46baa4165e90afcb42185968f4308fb62cf31e21ef0ca: Status 404 returned error can't find the container with id e57e59f405cfd583c1e46baa4165e90afcb42185968f4308fb62cf31e21ef0ca Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.312022 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.332082 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.736981 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3b8ddb20-6758-4eff-a0ea-0c0437f990e4","Type":"ContainerStarted","Data":"4f30d3ef694e80dd7050a97be0da2741f7af55b1b19609b31e473823efca166c"} Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.737369 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3b8ddb20-6758-4eff-a0ea-0c0437f990e4","Type":"ContainerStarted","Data":"195239f8a0b52cbd59a262b288ae5f1ce2f03286dcb87e87c93b3a87e2c0fc12"} Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.737381 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3b8ddb20-6758-4eff-a0ea-0c0437f990e4","Type":"ContainerStarted","Data":"2fd318577e1d3b5b4e132dc9f12b9bd2e727d369531d457bd7668d8e229373dc"} Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.740438 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerStarted","Data":"5076bce9517105fea3b188931b6f701c0c0d74cb8d4d6bea7bedc42ff87dfe47"} Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.742248 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e083fa11-ceea-4516-8fb9-84b13faf4411","Type":"ContainerStarted","Data":"643dd6606d0ad69895936b2649106fd29595c3582f894e1f1ca38b110365fe51"} Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.744050 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dffd2675-6c13-4c1e-82c5-d19c859db134","Type":"ContainerStarted","Data":"704e9a76af070f7e2bd4dd62e09b3408c8a0ddf0948a2a16c8610dafa4c0bc1b"} Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.744075 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dffd2675-6c13-4c1e-82c5-d19c859db134","Type":"ContainerStarted","Data":"37481aa326a298b85abb92d924b65e2cb95ba4213a2acbb89012f97f59971b3c"} Jan 27 16:08:31 crc 
kubenswrapper[4966]: I0127 16:08:31.744086 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dffd2675-6c13-4c1e-82c5-d19c859db134","Type":"ContainerStarted","Data":"e57e59f405cfd583c1e46baa4165e90afcb42185968f4308fb62cf31e21ef0ca"} Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.782811 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.782793823 podStartE2EDuration="2.782793823s" podCreationTimestamp="2026-01-27 16:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:08:31.77727515 +0000 UTC m=+1578.080068658" watchObservedRunningTime="2026-01-27 16:08:31.782793823 +0000 UTC m=+1578.085587311" Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.787577 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.787559233 podStartE2EDuration="1.787559233s" podCreationTimestamp="2026-01-27 16:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:08:31.762196887 +0000 UTC m=+1578.064990385" watchObservedRunningTime="2026-01-27 16:08:31.787559233 +0000 UTC m=+1578.090352721" Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.815091 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.590058829 podStartE2EDuration="5.815071157s" podCreationTimestamp="2026-01-27 16:08:26 +0000 UTC" firstStartedPulling="2026-01-27 16:08:27.884804012 +0000 UTC m=+1574.187597500" lastFinishedPulling="2026-01-27 16:08:31.10981634 +0000 UTC m=+1577.412609828" observedRunningTime="2026-01-27 16:08:31.807866271 +0000 UTC m=+1578.110659779" watchObservedRunningTime="2026-01-27 16:08:31.815071157 +0000 UTC m=+1578.117864635" Jan 27 16:08:31 crc kubenswrapper[4966]: I0127 16:08:31.845789 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.84576685 podStartE2EDuration="1.84576685s" podCreationTimestamp="2026-01-27 16:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:08:31.833772543 +0000 UTC m=+1578.136566061" watchObservedRunningTime="2026-01-27 16:08:31.84576685 +0000 UTC m=+1578.148560338" Jan 27 16:08:35 crc kubenswrapper[4966]: I0127 16:08:35.098198 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 16:08:35 crc kubenswrapper[4966]: I0127 16:08:35.782686 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 16:08:35 crc kubenswrapper[4966]: I0127 16:08:35.783164 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 16:08:40 crc kubenswrapper[4966]: I0127 16:08:40.098150 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 16:08:40 crc kubenswrapper[4966]: I0127 16:08:40.144859 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 16:08:40 crc kubenswrapper[4966]: I0127 16:08:40.770341 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 
27 16:08:40 crc kubenswrapper[4966]: I0127 16:08:40.770698 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 16:08:40 crc kubenswrapper[4966]: I0127 16:08:40.784066 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 16:08:40 crc kubenswrapper[4966]: I0127 16:08:40.784138 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 16:08:40 crc kubenswrapper[4966]: I0127 16:08:40.904202 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 16:08:41 crc kubenswrapper[4966]: I0127 16:08:41.790394 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3b8ddb20-6758-4eff-a0ea-0c0437f990e4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.5:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 16:08:41 crc kubenswrapper[4966]: I0127 16:08:41.790436 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3b8ddb20-6758-4eff-a0ea-0c0437f990e4" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.5:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:08:41 crc kubenswrapper[4966]: I0127 16:08:41.807083 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dffd2675-6c13-4c1e-82c5-d19c859db134" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 16:08:41 crc kubenswrapper[4966]: I0127 16:08:41.807433 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dffd2675-6c13-4c1e-82c5-d19c859db134" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 16:08:42 crc kubenswrapper[4966]: I0127 16:08:42.525419 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:08:42 crc kubenswrapper[4966]: E0127 16:08:42.526357 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:08:46 crc kubenswrapper[4966]: I0127 16:08:46.962616 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 16:08:50 crc kubenswrapper[4966]: I0127 16:08:50.778711 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 16:08:50 crc kubenswrapper[4966]: I0127 16:08:50.779597 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 16:08:50 crc kubenswrapper[4966]: I0127 16:08:50.789495 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 16:08:50 crc kubenswrapper[4966]: I0127 16:08:50.796334 4966 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 16:08:50 crc kubenswrapper[4966]: I0127 16:08:50.796767 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 16:08:50 crc kubenswrapper[4966]: I0127 16:08:50.807097 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 16:08:50 crc kubenswrapper[4966]: I0127 16:08:50.807386 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 16:08:50 crc kubenswrapper[4966]: I0127 16:08:50.996317 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 16:08:51 crc kubenswrapper[4966]: I0127 16:08:51.004777 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 16:08:51 crc kubenswrapper[4966]: I0127 16:08:51.010115 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 16:08:51 crc kubenswrapper[4966]: I0127 16:08:51.705393 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 16:08:51 crc kubenswrapper[4966]: I0127 16:08:51.705707 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2d7787a4-9dd3-438b-8fc8-f6708ede1f4b" containerName="kube-state-metrics" containerID="cri-o://161f26658653b3872e9663c07498908d32eaa975eff30c7343026a6199b18c2e" gracePeriod=30 Jan 27 16:08:51 crc kubenswrapper[4966]: I0127 16:08:51.868665 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 16:08:51 crc kubenswrapper[4966]: I0127 16:08:51.869210 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="1af731ff-1327-4db7-b2e7-a90c451390e4" containerName="mysqld-exporter" containerID="cri-o://89be93a796137a45ba13c5e9527e2967c52e649ebf3f3579d47fc3933d7e33f9" gracePeriod=30 Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.015784 4966 generic.go:334] "Generic (PLEG): container finished" podID="1af731ff-1327-4db7-b2e7-a90c451390e4" containerID="89be93a796137a45ba13c5e9527e2967c52e649ebf3f3579d47fc3933d7e33f9" exitCode=2 Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.015869 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1af731ff-1327-4db7-b2e7-a90c451390e4","Type":"ContainerDied","Data":"89be93a796137a45ba13c5e9527e2967c52e649ebf3f3579d47fc3933d7e33f9"} Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.019546 4966 generic.go:334] "Generic (PLEG): container finished" podID="2d7787a4-9dd3-438b-8fc8-f6708ede1f4b" containerID="161f26658653b3872e9663c07498908d32eaa975eff30c7343026a6199b18c2e" exitCode=2 Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.020925 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d7787a4-9dd3-438b-8fc8-f6708ede1f4b","Type":"ContainerDied","Data":"161f26658653b3872e9663c07498908d32eaa975eff30c7343026a6199b18c2e"} Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.330874 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.387435 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmrr5\" (UniqueName: \"kubernetes.io/projected/2d7787a4-9dd3-438b-8fc8-f6708ede1f4b-kube-api-access-rmrr5\") pod \"2d7787a4-9dd3-438b-8fc8-f6708ede1f4b\" (UID: \"2d7787a4-9dd3-438b-8fc8-f6708ede1f4b\") " Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.409189 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7787a4-9dd3-438b-8fc8-f6708ede1f4b-kube-api-access-rmrr5" (OuterVolumeSpecName: "kube-api-access-rmrr5") pod "2d7787a4-9dd3-438b-8fc8-f6708ede1f4b" (UID: "2d7787a4-9dd3-438b-8fc8-f6708ede1f4b"). InnerVolumeSpecName "kube-api-access-rmrr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.478704 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.490805 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmrr5\" (UniqueName: \"kubernetes.io/projected/2d7787a4-9dd3-438b-8fc8-f6708ede1f4b-kube-api-access-rmrr5\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.591948 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-config-data\") pod \"1af731ff-1327-4db7-b2e7-a90c451390e4\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.592058 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gggm2\" (UniqueName: \"kubernetes.io/projected/1af731ff-1327-4db7-b2e7-a90c451390e4-kube-api-access-gggm2\") pod \"1af731ff-1327-4db7-b2e7-a90c451390e4\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.592171 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-combined-ca-bundle\") pod \"1af731ff-1327-4db7-b2e7-a90c451390e4\" (UID: \"1af731ff-1327-4db7-b2e7-a90c451390e4\") " Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.596581 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af731ff-1327-4db7-b2e7-a90c451390e4-kube-api-access-gggm2" (OuterVolumeSpecName: "kube-api-access-gggm2") pod "1af731ff-1327-4db7-b2e7-a90c451390e4" (UID: "1af731ff-1327-4db7-b2e7-a90c451390e4"). InnerVolumeSpecName "kube-api-access-gggm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.632119 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1af731ff-1327-4db7-b2e7-a90c451390e4" (UID: "1af731ff-1327-4db7-b2e7-a90c451390e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.658379 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-config-data" (OuterVolumeSpecName: "config-data") pod "1af731ff-1327-4db7-b2e7-a90c451390e4" (UID: "1af731ff-1327-4db7-b2e7-a90c451390e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.696028 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.696061 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gggm2\" (UniqueName: \"kubernetes.io/projected/1af731ff-1327-4db7-b2e7-a90c451390e4-kube-api-access-gggm2\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:52 crc kubenswrapper[4966]: I0127 16:08:52.696070 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af731ff-1327-4db7-b2e7-a90c451390e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.033471 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d7787a4-9dd3-438b-8fc8-f6708ede1f4b","Type":"ContainerDied","Data":"ff338db7cba2a5a941ca79f81feb9fd68a18ab055f2edcdc6afded90ef72cf31"} Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.033565 4966 scope.go:117] "RemoveContainer" containerID="161f26658653b3872e9663c07498908d32eaa975eff30c7343026a6199b18c2e" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.033812 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.040319 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1af731ff-1327-4db7-b2e7-a90c451390e4","Type":"ContainerDied","Data":"13cdd312ec12b363c87e9386de34142a18679682d5ebddc69fbb08afcde29d96"} Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.040335 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.100093 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.119836 4966 scope.go:117] "RemoveContainer" containerID="89be93a796137a45ba13c5e9527e2967c52e649ebf3f3579d47fc3933d7e33f9" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.131501 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.157010 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.184939 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.197762 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 16:08:53 crc kubenswrapper[4966]: E0127 16:08:53.198557 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af731ff-1327-4db7-b2e7-a90c451390e4" containerName="mysqld-exporter" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.198573 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af731ff-1327-4db7-b2e7-a90c451390e4" containerName="mysqld-exporter" Jan 27 16:08:53 crc kubenswrapper[4966]: E0127 16:08:53.198629 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7787a4-9dd3-438b-8fc8-f6708ede1f4b" containerName="kube-state-metrics" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.198640 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7787a4-9dd3-438b-8fc8-f6708ede1f4b" containerName="kube-state-metrics" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.198974 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7787a4-9dd3-438b-8fc8-f6708ede1f4b" containerName="kube-state-metrics" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.199024 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af731ff-1327-4db7-b2e7-a90c451390e4" containerName="mysqld-exporter" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.200473 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.204301 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.204477 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.211056 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.219988 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.225265 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.225585 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.226643 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.236252 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.312951 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4wpz\" (UniqueName: \"kubernetes.io/projected/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-kube-api-access-n4wpz\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.313021 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.313349 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.313872 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94241c97-14d2-406a-9ea5-1b9797ec4785-config-data\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.314111 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2zlw\" (UniqueName: \"kubernetes.io/projected/94241c97-14d2-406a-9ea5-1b9797ec4785-kube-api-access-x2zlw\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.314266 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/94241c97-14d2-406a-9ea5-1b9797ec4785-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.314577 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc 
kubenswrapper[4966]: I0127 16:08:53.314786 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94241c97-14d2-406a-9ea5-1b9797ec4785-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.418141 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94241c97-14d2-406a-9ea5-1b9797ec4785-config-data\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.418245 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2zlw\" (UniqueName: \"kubernetes.io/projected/94241c97-14d2-406a-9ea5-1b9797ec4785-kube-api-access-x2zlw\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.418290 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/94241c97-14d2-406a-9ea5-1b9797ec4785-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.418424 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.418521 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94241c97-14d2-406a-9ea5-1b9797ec4785-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.418591 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4wpz\" (UniqueName: \"kubernetes.io/projected/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-kube-api-access-n4wpz\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.418643 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.418677 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.423543 4966 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.423705 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94241c97-14d2-406a-9ea5-1b9797ec4785-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.423808 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.424255 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/94241c97-14d2-406a-9ea5-1b9797ec4785-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.424818 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94241c97-14d2-406a-9ea5-1b9797ec4785-config-data\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.433311 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.442298 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2zlw\" (UniqueName: \"kubernetes.io/projected/94241c97-14d2-406a-9ea5-1b9797ec4785-kube-api-access-x2zlw\") pod \"mysqld-exporter-0\" (UID: \"94241c97-14d2-406a-9ea5-1b9797ec4785\") " pod="openstack/mysqld-exporter-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.446648 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4wpz\" (UniqueName: \"kubernetes.io/projected/0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a-kube-api-access-n4wpz\") pod \"kube-state-metrics-0\" (UID: \"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a\") " pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.526829 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 16:08:53 crc kubenswrapper[4966]: I0127 16:08:53.541006 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.074241 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.102763 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"94241c97-14d2-406a-9ea5-1b9797ec4785","Type":"ContainerStarted","Data":"472753e53f38633b5ce8e0c3be27642e594ebb4a96908daaf5e394f125882c96"} Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.153949 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.536969 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af731ff-1327-4db7-b2e7-a90c451390e4" path="/var/lib/kubelet/pods/1af731ff-1327-4db7-b2e7-a90c451390e4/volumes" Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.538581 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d7787a4-9dd3-438b-8fc8-f6708ede1f4b" path="/var/lib/kubelet/pods/2d7787a4-9dd3-438b-8fc8-f6708ede1f4b/volumes" Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.663479 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.663912 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="ceilometer-central-agent" containerID="cri-o://327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1" gracePeriod=30 Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.664696 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="sg-core" containerID="cri-o://257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18" gracePeriod=30 Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.664692 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="proxy-httpd" containerID="cri-o://4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1" gracePeriod=30 Jan 27 16:08:54 crc kubenswrapper[4966]: I0127 16:08:54.664752 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="ceilometer-notification-agent" containerID="cri-o://488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24" gracePeriod=30 Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.125326 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"94241c97-14d2-406a-9ea5-1b9797ec4785","Type":"ContainerStarted","Data":"c7a717879b744ec5a129bb4f39d4f9a87b91106a16e24aa1e539060458d5df61"} Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.133427 4966 generic.go:334] "Generic (PLEG): container finished" podID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerID="4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1" exitCode=0 Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.133490 4966 generic.go:334] "Generic (PLEG): container finished" podID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerID="257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18" exitCode=2 Jan 27 16:08:55 crc 
kubenswrapper[4966]: I0127 16:08:55.133503 4966 generic.go:334] "Generic (PLEG): container finished" podID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerID="327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1" exitCode=0 Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.133584 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerDied","Data":"4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1"} Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.133616 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerDied","Data":"257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18"} Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.133659 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerDied","Data":"327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1"} Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.137531 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a","Type":"ContainerStarted","Data":"dc2286e38fdc74671bfab2fbffb1a05fe4e2288e2b2dab5344beba9b936bc6c5"} Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.137598 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a","Type":"ContainerStarted","Data":"201840cac33a397c8cfcf2e218cd07e976c9aa2374569af56b3ba783879f51fd"} Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.139929 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.171411 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.6279599120000001 podStartE2EDuration="2.171380888s" podCreationTimestamp="2026-01-27 16:08:53 +0000 UTC" firstStartedPulling="2026-01-27 16:08:54.053115578 +0000 UTC m=+1600.355909076" lastFinishedPulling="2026-01-27 16:08:54.596536564 +0000 UTC m=+1600.899330052" observedRunningTime="2026-01-27 16:08:55.1586872 +0000 UTC m=+1601.461480708" watchObservedRunningTime="2026-01-27 16:08:55.171380888 +0000 UTC m=+1601.474174396" Jan 27 16:08:55 crc kubenswrapper[4966]: I0127 16:08:55.202166 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.709365586 podStartE2EDuration="2.202135573s" podCreationTimestamp="2026-01-27 16:08:53 +0000 UTC" firstStartedPulling="2026-01-27 16:08:54.146532299 +0000 UTC m=+1600.449325777" lastFinishedPulling="2026-01-27 16:08:54.639302266 +0000 UTC m=+1600.942095764" observedRunningTime="2026-01-27 16:08:55.186051498 +0000 UTC m=+1601.488844986" watchObservedRunningTime="2026-01-27 16:08:55.202135573 +0000 UTC m=+1601.504929061" Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.796415 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.907188 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-scripts\") pod \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.907264 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-log-httpd\") pod \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.907292 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-sg-core-conf-yaml\") pod \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.907486 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-kube-api-access-jsf75\") pod \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.907526 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-run-httpd\") pod \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.907621 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-config-data\") pod \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.907652 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-combined-ca-bundle\") pod \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\" (UID: \"52643db3-c4a6-4c23-9d4c-d1d29a2983a5\") " Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.907814 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "52643db3-c4a6-4c23-9d4c-d1d29a2983a5" (UID: "52643db3-c4a6-4c23-9d4c-d1d29a2983a5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.908143 4966 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.908422 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "52643db3-c4a6-4c23-9d4c-d1d29a2983a5" (UID: "52643db3-c4a6-4c23-9d4c-d1d29a2983a5"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.939745 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-scripts" (OuterVolumeSpecName: "scripts") pod "52643db3-c4a6-4c23-9d4c-d1d29a2983a5" (UID: "52643db3-c4a6-4c23-9d4c-d1d29a2983a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:56 crc kubenswrapper[4966]: I0127 16:08:56.949118 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-kube-api-access-jsf75" (OuterVolumeSpecName: "kube-api-access-jsf75") pod "52643db3-c4a6-4c23-9d4c-d1d29a2983a5" (UID: "52643db3-c4a6-4c23-9d4c-d1d29a2983a5"). InnerVolumeSpecName "kube-api-access-jsf75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.012166 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.012196 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsf75\" (UniqueName: \"kubernetes.io/projected/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-kube-api-access-jsf75\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.012207 4966 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.027058 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "52643db3-c4a6-4c23-9d4c-d1d29a2983a5" (UID: "52643db3-c4a6-4c23-9d4c-d1d29a2983a5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.093728 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52643db3-c4a6-4c23-9d4c-d1d29a2983a5" (UID: "52643db3-c4a6-4c23-9d4c-d1d29a2983a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.105666 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-config-data" (OuterVolumeSpecName: "config-data") pod "52643db3-c4a6-4c23-9d4c-d1d29a2983a5" (UID: "52643db3-c4a6-4c23-9d4c-d1d29a2983a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.114945 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.114976 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.114991 4966 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52643db3-c4a6-4c23-9d4c-d1d29a2983a5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.167763 4966 generic.go:334] "Generic (PLEG): container finished" podID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerID="488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24" exitCode=0 Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.168801 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.171052 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerDied","Data":"488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24"} Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.171128 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52643db3-c4a6-4c23-9d4c-d1d29a2983a5","Type":"ContainerDied","Data":"c149d267f47258889bd1ed4b5c89a142f0a9a68cca9731b6989a47de7d422180"} Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.171155 4966 scope.go:117] "RemoveContainer" containerID="4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.204694 4966 scope.go:117] "RemoveContainer" containerID="257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.214678 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.234916 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.258560 4966 scope.go:117] "RemoveContainer" containerID="488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.289006 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:57 crc kubenswrapper[4966]: E0127 16:08:57.290076 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="sg-core" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.290098 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="sg-core" Jan 27 16:08:57 crc kubenswrapper[4966]: E0127 16:08:57.290119 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="proxy-httpd" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.290127 4966 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="proxy-httpd" Jan 27 16:08:57 crc kubenswrapper[4966]: E0127 16:08:57.290191 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="ceilometer-notification-agent" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.290198 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="ceilometer-notification-agent" Jan 27 16:08:57 crc kubenswrapper[4966]: E0127 16:08:57.290216 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="ceilometer-central-agent" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.290222 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="ceilometer-central-agent" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.290599 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="proxy-httpd" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.290630 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="ceilometer-notification-agent" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.290674 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="sg-core" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.290692 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" containerName="ceilometer-central-agent" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.296071 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.301075 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.301326 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.301831 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.304056 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.311133 4966 scope.go:117] "RemoveContainer" containerID="327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.321046 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.321105 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-run-httpd\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.321263 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.321317 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-log-httpd\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.321579 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-config-data\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.321655 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.321711 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-scripts\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.321738 4966 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbwzt\" (UniqueName: \"kubernetes.io/projected/edc5a3d0-7858-4d89-8b67-0a448c774257-kube-api-access-xbwzt\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.385074 4966 scope.go:117] "RemoveContainer" containerID="4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1" Jan 27 16:08:57 crc kubenswrapper[4966]: E0127 16:08:57.385506 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1\": container with ID starting with 4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1 not found: ID does not exist" containerID="4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.385532 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1"} err="failed to get container status \"4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1\": rpc error: code = NotFound desc = could not find container \"4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1\": container with ID starting with 4ee2880cb2e27901df81922cc2816e47765d283b3d8b8e714681347d929d1be1 not found: ID does not exist" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.385553 4966 scope.go:117] "RemoveContainer" containerID="257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18" Jan 27 16:08:57 crc kubenswrapper[4966]: E0127 16:08:57.385843 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18\": container with ID starting with 257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18 not found: ID does not exist" containerID="257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.385884 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18"} err="failed to get container status \"257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18\": rpc error: code = NotFound desc = could not find container \"257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18\": container with ID starting with 257ed30445215a85410a4ee98915e74accfe4d13a6f23946f25fc2b95d359c18 not found: ID does not exist" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.385937 4966 scope.go:117] "RemoveContainer" containerID="488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24" Jan 27 16:08:57 crc kubenswrapper[4966]: E0127 16:08:57.386227 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24\": container with ID starting with 488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24 not found: ID does not exist" containerID="488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.386250 4966 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24"} err="failed to get container status \"488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24\": rpc error: code = NotFound desc = could not find container \"488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24\": container with ID starting with 488f29b0087b9ca63ccee87bf7e0dbd14ae4bf15cf130d6d54bcd6b59067ac24 not found: ID does not exist" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.386265 4966 scope.go:117] "RemoveContainer" containerID="327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1" Jan 27 16:08:57 crc kubenswrapper[4966]: E0127 16:08:57.386478 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1\": container with ID starting with 327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1 not found: ID does not exist" containerID="327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.386498 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1"} err="failed to get container status \"327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1\": rpc error: code = NotFound desc = could not find container \"327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1\": container with ID starting with 327fb8eb5a30675ba5b018f4f7bb9557e190c305c5fdbe4e2c64ad3b0f92fff1 not found: ID does not exist" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.423853 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.424695 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-scripts\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.424808 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbwzt\" (UniqueName: \"kubernetes.io/projected/edc5a3d0-7858-4d89-8b67-0a448c774257-kube-api-access-xbwzt\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.424955 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.425033 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-run-httpd\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 
16:08:57.425205 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.425308 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-log-httpd\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.425402 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-config-data\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.426283 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-log-httpd\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.426551 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-run-httpd\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.428224 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.429255 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-config-data\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.429454 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-scripts\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.429721 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.432085 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.445326 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-xbwzt\" (UniqueName: \"kubernetes.io/projected/edc5a3d0-7858-4d89-8b67-0a448c774257-kube-api-access-xbwzt\") pod \"ceilometer-0\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " pod="openstack/ceilometer-0" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.521695 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:08:57 crc kubenswrapper[4966]: E0127 16:08:57.522057 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:08:57 crc kubenswrapper[4966]: I0127 16:08:57.615875 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:08:58 crc kubenswrapper[4966]: I0127 16:08:58.198027 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:08:58 crc kubenswrapper[4966]: I0127 16:08:58.534413 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52643db3-c4a6-4c23-9d4c-d1d29a2983a5" path="/var/lib/kubelet/pods/52643db3-c4a6-4c23-9d4c-d1d29a2983a5/volumes" Jan 27 16:08:59 crc kubenswrapper[4966]: I0127 16:08:59.192489 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerStarted","Data":"73d4984f23c5cd36f73b7b9ba1f2c3463af3076e812b6b2c618c65058dfff6c3"} Jan 27 16:09:00 crc kubenswrapper[4966]: I0127 16:09:00.204362 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerStarted","Data":"9e6cdb254954f7d484dc207b0b295617c0ec0ab69847e6a0923e058b0029780f"} Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.220541 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerStarted","Data":"680012c14335499d1f834829bca89e04e0b08fcccd6889316c48077ddaa17b5c"} Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.439856 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-9mgp8"] Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.452137 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-9mgp8"] Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.522399 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jjm6h"] Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.524059 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.539294 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jjm6h"] Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.637837 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-combined-ca-bundle\") pod \"heat-db-sync-jjm6h\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") " pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.637904 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-config-data\") pod \"heat-db-sync-jjm6h\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") " pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.638028 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmjqt\" (UniqueName: \"kubernetes.io/projected/65a82b08-ff78-4e1e-b183-e1d06925aa5e-kube-api-access-kmjqt\") pod \"heat-db-sync-jjm6h\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") " pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.739682 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-combined-ca-bundle\") pod \"heat-db-sync-jjm6h\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") " pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.739724 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-config-data\") pod \"heat-db-sync-jjm6h\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") " pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.739803 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmjqt\" (UniqueName: \"kubernetes.io/projected/65a82b08-ff78-4e1e-b183-e1d06925aa5e-kube-api-access-kmjqt\") pod \"heat-db-sync-jjm6h\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") " pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.745411 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-config-data\") pod \"heat-db-sync-jjm6h\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") " pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.754209 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-combined-ca-bundle\") pod \"heat-db-sync-jjm6h\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") " pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.760468 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmjqt\" (UniqueName: \"kubernetes.io/projected/65a82b08-ff78-4e1e-b183-e1d06925aa5e-kube-api-access-kmjqt\") pod \"heat-db-sync-jjm6h\" (UID: 
\"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") " pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:01 crc kubenswrapper[4966]: I0127 16:09:01.859123 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jjm6h" Jan 27 16:09:02 crc kubenswrapper[4966]: I0127 16:09:02.233794 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerStarted","Data":"5856e37b7b7b68d89441858a9e010de97c1db3397803bcab375b530bda3c03be"} Jan 27 16:09:02 crc kubenswrapper[4966]: I0127 16:09:02.360443 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jjm6h"] Jan 27 16:09:02 crc kubenswrapper[4966]: I0127 16:09:02.535316 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05fc08f5-c60a-4248-8de2-447d0415188e" path="/var/lib/kubelet/pods/05fc08f5-c60a-4248-8de2-447d0415188e/volumes" Jan 27 16:09:03 crc kubenswrapper[4966]: I0127 16:09:03.285487 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jjm6h" event={"ID":"65a82b08-ff78-4e1e-b183-e1d06925aa5e","Type":"ContainerStarted","Data":"33fb0c2e698c84fc8a7d0dc51fa2fc1bad343c8d200431cc8a3881da809c3542"} Jan 27 16:09:03 crc kubenswrapper[4966]: I0127 16:09:03.290417 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerStarted","Data":"a654106532c9021af52f8c58819f2ce58a71338386bcad82dc9ea6ee2241129a"} Jan 27 16:09:03 crc kubenswrapper[4966]: I0127 16:09:03.290736 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 16:09:03 crc kubenswrapper[4966]: I0127 16:09:03.343158 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8777133240000001 podStartE2EDuration="6.343129006s" podCreationTimestamp="2026-01-27 16:08:57 +0000 UTC" firstStartedPulling="2026-01-27 16:08:58.200734005 +0000 UTC m=+1604.503527493" lastFinishedPulling="2026-01-27 16:09:02.666149677 +0000 UTC m=+1608.968943175" observedRunningTime="2026-01-27 16:09:03.314982573 +0000 UTC m=+1609.617776061" watchObservedRunningTime="2026-01-27 16:09:03.343129006 +0000 UTC m=+1609.645922494" Jan 27 16:09:03 crc kubenswrapper[4966]: I0127 16:09:03.546041 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 16:09:03 crc kubenswrapper[4966]: I0127 16:09:03.711623 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 16:09:04 crc kubenswrapper[4966]: I0127 16:09:04.719452 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 16:09:06 crc kubenswrapper[4966]: I0127 16:09:06.783964 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:09:06 crc kubenswrapper[4966]: I0127 16:09:06.784450 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="proxy-httpd" containerID="cri-o://a654106532c9021af52f8c58819f2ce58a71338386bcad82dc9ea6ee2241129a" gracePeriod=30 Jan 27 16:09:06 crc kubenswrapper[4966]: I0127 16:09:06.784824 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="sg-core" 
containerID="cri-o://5856e37b7b7b68d89441858a9e010de97c1db3397803bcab375b530bda3c03be" gracePeriod=30 Jan 27 16:09:06 crc kubenswrapper[4966]: I0127 16:09:06.784920 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="ceilometer-notification-agent" containerID="cri-o://680012c14335499d1f834829bca89e04e0b08fcccd6889316c48077ddaa17b5c" gracePeriod=30 Jan 27 16:09:06 crc kubenswrapper[4966]: I0127 16:09:06.785003 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="ceilometer-central-agent" containerID="cri-o://9e6cdb254954f7d484dc207b0b295617c0ec0ab69847e6a0923e058b0029780f" gracePeriod=30 Jan 27 16:09:07 crc kubenswrapper[4966]: I0127 16:09:07.337592 4966 generic.go:334] "Generic (PLEG): container finished" podID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerID="5856e37b7b7b68d89441858a9e010de97c1db3397803bcab375b530bda3c03be" exitCode=2 Jan 27 16:09:07 crc kubenswrapper[4966]: I0127 16:09:07.337680 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerDied","Data":"5856e37b7b7b68d89441858a9e010de97c1db3397803bcab375b530bda3c03be"} Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.385843 4966 generic.go:334] "Generic (PLEG): container finished" podID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerID="a654106532c9021af52f8c58819f2ce58a71338386bcad82dc9ea6ee2241129a" exitCode=0 Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.386235 4966 generic.go:334] "Generic (PLEG): container finished" podID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerID="680012c14335499d1f834829bca89e04e0b08fcccd6889316c48077ddaa17b5c" exitCode=0 Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.386243 4966 generic.go:334] "Generic (PLEG): container finished" podID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerID="9e6cdb254954f7d484dc207b0b295617c0ec0ab69847e6a0923e058b0029780f" exitCode=0 Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.386263 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerDied","Data":"a654106532c9021af52f8c58819f2ce58a71338386bcad82dc9ea6ee2241129a"} Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.386299 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerDied","Data":"680012c14335499d1f834829bca89e04e0b08fcccd6889316c48077ddaa17b5c"} Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.386309 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerDied","Data":"9e6cdb254954f7d484dc207b0b295617c0ec0ab69847e6a0923e058b0029780f"} Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.729889 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.811150 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-combined-ca-bundle\") pod \"edc5a3d0-7858-4d89-8b67-0a448c774257\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.811831 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-config-data\") pod \"edc5a3d0-7858-4d89-8b67-0a448c774257\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.811865 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-scripts\") pod \"edc5a3d0-7858-4d89-8b67-0a448c774257\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.811978 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbwzt\" (UniqueName: \"kubernetes.io/projected/edc5a3d0-7858-4d89-8b67-0a448c774257-kube-api-access-xbwzt\") pod \"edc5a3d0-7858-4d89-8b67-0a448c774257\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.812025 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-run-httpd\") pod \"edc5a3d0-7858-4d89-8b67-0a448c774257\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.812062 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-log-httpd\") pod \"edc5a3d0-7858-4d89-8b67-0a448c774257\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.812159 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-sg-core-conf-yaml\") pod \"edc5a3d0-7858-4d89-8b67-0a448c774257\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.812278 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-ceilometer-tls-certs\") pod \"edc5a3d0-7858-4d89-8b67-0a448c774257\" (UID: \"edc5a3d0-7858-4d89-8b67-0a448c774257\") " Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.813712 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "edc5a3d0-7858-4d89-8b67-0a448c774257" (UID: "edc5a3d0-7858-4d89-8b67-0a448c774257"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.814145 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "edc5a3d0-7858-4d89-8b67-0a448c774257" (UID: "edc5a3d0-7858-4d89-8b67-0a448c774257"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.818933 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="rabbitmq" containerID="cri-o://fd167b1321ad76f31f0fc4ebd397387ffcc7106c1a113bb7df2740ee859fe644" gracePeriod=604795 Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.823382 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-scripts" (OuterVolumeSpecName: "scripts") pod "edc5a3d0-7858-4d89-8b67-0a448c774257" (UID: "edc5a3d0-7858-4d89-8b67-0a448c774257"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.856798 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc5a3d0-7858-4d89-8b67-0a448c774257-kube-api-access-xbwzt" (OuterVolumeSpecName: "kube-api-access-xbwzt") pod "edc5a3d0-7858-4d89-8b67-0a448c774257" (UID: "edc5a3d0-7858-4d89-8b67-0a448c774257"). InnerVolumeSpecName "kube-api-access-xbwzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.865847 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "edc5a3d0-7858-4d89-8b67-0a448c774257" (UID: "edc5a3d0-7858-4d89-8b67-0a448c774257"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.909469 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "edc5a3d0-7858-4d89-8b67-0a448c774257" (UID: "edc5a3d0-7858-4d89-8b67-0a448c774257"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.918793 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbwzt\" (UniqueName: \"kubernetes.io/projected/edc5a3d0-7858-4d89-8b67-0a448c774257-kube-api-access-xbwzt\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.918835 4966 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.918849 4966 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edc5a3d0-7858-4d89-8b67-0a448c774257-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.918860 4966 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.918870 4966 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.918882 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.936076 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edc5a3d0-7858-4d89-8b67-0a448c774257" (UID: "edc5a3d0-7858-4d89-8b67-0a448c774257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:09:08 crc kubenswrapper[4966]: I0127 16:09:08.994256 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-config-data" (OuterVolumeSpecName: "config-data") pod "edc5a3d0-7858-4d89-8b67-0a448c774257" (UID: "edc5a3d0-7858-4d89-8b67-0a448c774257"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.021039 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.021085 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc5a3d0-7858-4d89-8b67-0a448c774257-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.401995 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edc5a3d0-7858-4d89-8b67-0a448c774257","Type":"ContainerDied","Data":"73d4984f23c5cd36f73b7b9ba1f2c3463af3076e812b6b2c618c65058dfff6c3"} Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.402385 4966 scope.go:117] "RemoveContainer" containerID="a654106532c9021af52f8c58819f2ce58a71338386bcad82dc9ea6ee2241129a" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.402302 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.442761 4966 scope.go:117] "RemoveContainer" containerID="5856e37b7b7b68d89441858a9e010de97c1db3397803bcab375b530bda3c03be" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.449418 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.463427 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.476086 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:09:09 crc kubenswrapper[4966]: E0127 16:09:09.476705 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="proxy-httpd" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.476721 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="proxy-httpd" Jan 27 16:09:09 crc kubenswrapper[4966]: E0127 16:09:09.476738 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="ceilometer-central-agent" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.476745 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="ceilometer-central-agent" Jan 27 16:09:09 crc kubenswrapper[4966]: E0127 16:09:09.476759 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="ceilometer-notification-agent" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.476765 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="ceilometer-notification-agent" Jan 27 16:09:09 crc kubenswrapper[4966]: E0127 16:09:09.476820 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="sg-core" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.476827 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="sg-core" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.477216 4966 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="proxy-httpd" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.477257 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="ceilometer-central-agent" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.477271 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="sg-core" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.477286 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" containerName="ceilometer-notification-agent" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.480887 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.482325 4966 scope.go:117] "RemoveContainer" containerID="680012c14335499d1f834829bca89e04e0b08fcccd6889316c48077ddaa17b5c" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.486979 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.488574 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.488589 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.489334 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.520847 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:09:09 crc kubenswrapper[4966]: E0127 16:09:09.521174 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.535208 4966 scope.go:117] "RemoveContainer" containerID="9e6cdb254954f7d484dc207b0b295617c0ec0ab69847e6a0923e058b0029780f" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.637145 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.637264 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-scripts\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.637479 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-config-data\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.637791 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.637857 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20044d98-b229-4e9a-946f-b18902841fe6-run-httpd\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.638225 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.638284 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vppqm\" (UniqueName: \"kubernetes.io/projected/20044d98-b229-4e9a-946f-b18902841fe6-kube-api-access-vppqm\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.638661 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20044d98-b229-4e9a-946f-b18902841fe6-log-httpd\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.741965 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20044d98-b229-4e9a-946f-b18902841fe6-run-httpd\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.742557 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.742781 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vppqm\" (UniqueName: \"kubernetes.io/projected/20044d98-b229-4e9a-946f-b18902841fe6-kube-api-access-vppqm\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.742984 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20044d98-b229-4e9a-946f-b18902841fe6-log-httpd\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.743073 4966 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20044d98-b229-4e9a-946f-b18902841fe6-run-httpd\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.743377 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20044d98-b229-4e9a-946f-b18902841fe6-log-httpd\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.743758 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.744138 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-scripts\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.744307 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-config-data\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.744543 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.748564 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.749476 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.752220 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.752530 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-scripts\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.754050 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20044d98-b229-4e9a-946f-b18902841fe6-config-data\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.772031 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vppqm\" (UniqueName: \"kubernetes.io/projected/20044d98-b229-4e9a-946f-b18902841fe6-kube-api-access-vppqm\") pod \"ceilometer-0\" (UID: \"20044d98-b229-4e9a-946f-b18902841fe6\") " pod="openstack/ceilometer-0" Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.806593 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerName="rabbitmq" containerID="cri-o://c157f7e47f5330bec30b4bc570fc98c7930f51f7ce8e9d4a0fdcc0a31cc6e885" gracePeriod=604795 Jan 27 16:09:09 crc kubenswrapper[4966]: I0127 16:09:09.828092 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 16:09:10 crc kubenswrapper[4966]: I0127 16:09:10.333135 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 16:09:10 crc kubenswrapper[4966]: W0127 16:09:10.345425 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20044d98_b229_4e9a_946f_b18902841fe6.slice/crio-28224a2b1e49b75e3611f224db04797fbbef8bf534a9705c8c5c7587cff76372 WatchSource:0}: Error finding container 28224a2b1e49b75e3611f224db04797fbbef8bf534a9705c8c5c7587cff76372: Status 404 returned error can't find the container with id 28224a2b1e49b75e3611f224db04797fbbef8bf534a9705c8c5c7587cff76372 Jan 27 16:09:10 crc kubenswrapper[4966]: I0127 16:09:10.415587 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20044d98-b229-4e9a-946f-b18902841fe6","Type":"ContainerStarted","Data":"28224a2b1e49b75e3611f224db04797fbbef8bf534a9705c8c5c7587cff76372"} Jan 27 16:09:10 crc kubenswrapper[4966]: I0127 16:09:10.533053 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc5a3d0-7858-4d89-8b67-0a448c774257" path="/var/lib/kubelet/pods/edc5a3d0-7858-4d89-8b67-0a448c774257/volumes" Jan 27 16:09:14 crc kubenswrapper[4966]: I0127 16:09:14.746940 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 27 16:09:15 crc kubenswrapper[4966]: I0127 16:09:15.053724 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Jan 27 16:09:15 crc kubenswrapper[4966]: I0127 16:09:15.514515 4966 generic.go:334] "Generic (PLEG): container finished" podID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerID="fd167b1321ad76f31f0fc4ebd397387ffcc7106c1a113bb7df2740ee859fe644" exitCode=0 Jan 27 16:09:15 crc kubenswrapper[4966]: I0127 16:09:15.514832 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb","Type":"ContainerDied","Data":"fd167b1321ad76f31f0fc4ebd397387ffcc7106c1a113bb7df2740ee859fe644"} Jan 27 16:09:16 crc 
kubenswrapper[4966]: I0127 16:09:16.554761 4966 generic.go:334] "Generic (PLEG): container finished" podID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerID="c157f7e47f5330bec30b4bc570fc98c7930f51f7ce8e9d4a0fdcc0a31cc6e885" exitCode=0 Jan 27 16:09:16 crc kubenswrapper[4966]: I0127 16:09:16.554844 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b278fb8-add2-4ddc-9e93-71962f1bb6fa","Type":"ContainerDied","Data":"c157f7e47f5330bec30b4bc570fc98c7930f51f7ce8e9d4a0fdcc0a31cc6e885"} Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.716588 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-tsf2n"] Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.719275 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.724640 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.739012 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.739373 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.739520 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.739601 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.739706 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dk8\" (UniqueName: \"kubernetes.io/projected/a36e5960-b960-4650-ac3c-c588e0047b4e-kube-api-access-87dk8\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.739787 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-config\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 
16:09:19.739872 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.771235 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-tsf2n"] Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.841146 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-config\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.841476 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.841738 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.841930 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.842160 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.842326 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.842470 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87dk8\" (UniqueName: \"kubernetes.io/projected/a36e5960-b960-4650-ac3c-c588e0047b4e-kube-api-access-87dk8\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.842480 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.842158 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-config\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.842643 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.842683 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.843016 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.843540 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:19 crc kubenswrapper[4966]: I0127 16:09:19.864703 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87dk8\" (UniqueName: \"kubernetes.io/projected/a36e5960-b960-4650-ac3c-c588e0047b4e-kube-api-access-87dk8\") pod \"dnsmasq-dns-5b75489c6f-tsf2n\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") " pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:20 crc kubenswrapper[4966]: I0127 16:09:20.051407 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" Jan 27 16:09:20 crc kubenswrapper[4966]: I0127 16:09:20.524208 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:09:20 crc kubenswrapper[4966]: E0127 16:09:20.525167 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.232006 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.238521 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321364 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-server-conf\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321410 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-config-data\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321446 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-confd\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321513 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-erlang-cookie\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321611 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-erlang-cookie\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321636 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-erlang-cookie-secret\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321662 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-confd\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321710 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-plugins\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321731 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-plugins-conf\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321753 4966 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-pod-info\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321790 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7jj\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-kube-api-access-4c7jj\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321812 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-erlang-cookie-secret\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321856 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-config-data\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.321882 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-plugins\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.322026 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-server-conf\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.322061 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dks9j\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-kube-api-access-dks9j\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.322780 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.322816 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-tls\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.322861 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-plugins-conf\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.322958 4966 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-tls\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.323035 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-pod-info\") pod \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\" (UID: \"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.323139 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.323712 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.324747 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.327011 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.341589 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-kube-api-access-4c7jj" (OuterVolumeSpecName: "kube-api-access-4c7jj") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "kube-api-access-4c7jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.342228 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.342599 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.343202 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.344282 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.346015 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.358019 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-pod-info" (OuterVolumeSpecName: "pod-info") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.359397 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-pod-info" (OuterVolumeSpecName: "pod-info") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.364320 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-kube-api-access-dks9j" (OuterVolumeSpecName: "kube-api-access-dks9j") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "kube-api-access-dks9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.364963 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.372401 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.398306 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.415033 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de" (OuterVolumeSpecName: "persistence") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427225 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dks9j\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-kube-api-access-dks9j\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427333 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") on node \"crc\" " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427355 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427368 4966 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427380 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427392 4966 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427404 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427418 4966 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427431 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427445 4966 
reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427456 4966 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427470 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c7jj\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-kube-api-access-4c7jj\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427481 4966 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.427492 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: E0127 16:09:25.434771 4966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7 podName:9b278fb8-add2-4ddc-9e93-71962f1bb6fa nodeName:}" failed. No retries permitted until 2026-01-27 16:09:25.934748627 +0000 UTC m=+1632.237542115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.436697 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-config-data" (OuterVolumeSpecName: "config-data") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.458650 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-config-data" (OuterVolumeSpecName: "config-data") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.496563 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-server-conf" (OuterVolumeSpecName: "server-conf") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.502241 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.502408 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de") on node "crc" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.525917 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-server-conf" (OuterVolumeSpecName: "server-conf") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.530762 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.530808 4966 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.530841 4966 reconciler_common.go:293] "Volume detached for volume \"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.530927 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.530957 4966 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.633210 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.633696 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b278fb8-add2-4ddc-9e93-71962f1bb6fa-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.650716 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" (UID: "3f9cc33b-7f85-4c4a-9cf5-074309fd76eb"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.670443 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b278fb8-add2-4ddc-9e93-71962f1bb6fa","Type":"ContainerDied","Data":"4eb8955693c47d65abbb32f2c568bb793f190f178f91c251c98263d87ba841d6"} Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.670514 4966 scope.go:117] "RemoveContainer" containerID="c157f7e47f5330bec30b4bc570fc98c7930f51f7ce8e9d4a0fdcc0a31cc6e885" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.670756 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.677176 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3f9cc33b-7f85-4c4a-9cf5-074309fd76eb","Type":"ContainerDied","Data":"a3da364fca59502a0d8509bca45b20a38e2438d48b72460ef72a76650d067b1a"} Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.677527 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.711057 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.735747 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.754791 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.808644 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 16:09:25 crc kubenswrapper[4966]: E0127 16:09:25.810125 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerName="setup-container" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.810149 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerName="setup-container" Jan 27 16:09:25 crc kubenswrapper[4966]: E0127 16:09:25.810168 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="rabbitmq" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.810174 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="rabbitmq" Jan 27 16:09:25 crc kubenswrapper[4966]: E0127 16:09:25.810191 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerName="rabbitmq" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.810197 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerName="rabbitmq" Jan 27 16:09:25 crc kubenswrapper[4966]: E0127 16:09:25.810218 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="setup-container" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.810225 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="setup-container" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.810469 
4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerName="rabbitmq" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.810492 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="rabbitmq" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.812135 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.834431 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.938845 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") pod \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\" (UID: \"9b278fb8-add2-4ddc-9e93-71962f1bb6fa\") " Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.939708 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/793ef49f-7394-4261-a7c1-b262c6744776-server-conf\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.939759 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zzv6\" (UniqueName: \"kubernetes.io/projected/793ef49f-7394-4261-a7c1-b262c6744776-kube-api-access-4zzv6\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.939860 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.939928 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.939954 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.940075 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.940190 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/793ef49f-7394-4261-a7c1-b262c6744776-pod-info\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.940244 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/793ef49f-7394-4261-a7c1-b262c6744776-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.940276 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.940318 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/793ef49f-7394-4261-a7c1-b262c6744776-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.940741 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/793ef49f-7394-4261-a7c1-b262c6744776-config-data\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:25 crc kubenswrapper[4966]: I0127 16:09:25.957721 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7" (OuterVolumeSpecName: "persistence") pod "9b278fb8-add2-4ddc-9e93-71962f1bb6fa" (UID: "9b278fb8-add2-4ddc-9e93-71962f1bb6fa"). InnerVolumeSpecName "pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.063003 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/793ef49f-7394-4261-a7c1-b262c6744776-server-conf\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.063085 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zzv6\" (UniqueName: \"kubernetes.io/projected/793ef49f-7394-4261-a7c1-b262c6744776-kube-api-access-4zzv6\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.063297 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.063401 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.063438 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.064799 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.065764 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.065832 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/793ef49f-7394-4261-a7c1-b262c6744776-pod-info\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.065856 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/793ef49f-7394-4261-a7c1-b262c6744776-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.065880 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.065917 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/793ef49f-7394-4261-a7c1-b262c6744776-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.066646 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.066783 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/793ef49f-7394-4261-a7c1-b262c6744776-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.068338 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.070910 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/793ef49f-7394-4261-a7c1-b262c6744776-pod-info\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.071013 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2" Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.072030 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.072058 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6968b035681a68868604c21042f04be572e2d6e7a96fb8fab8851faec754bf6a/globalmount\"" pod="openstack/rabbitmq-server-2"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.072344 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/793ef49f-7394-4261-a7c1-b262c6744776-config-data\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.072604 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") on node \"crc\" "
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.078035 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/793ef49f-7394-4261-a7c1-b262c6744776-server-conf\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.081891 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/793ef49f-7394-4261-a7c1-b262c6744776-config-data\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.085325 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/793ef49f-7394-4261-a7c1-b262c6744776-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.088050 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zzv6\" (UniqueName: \"kubernetes.io/projected/793ef49f-7394-4261-a7c1-b262c6744776-kube-api-access-4zzv6\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.101151 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.113570 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/793ef49f-7394-4261-a7c1-b262c6744776-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.120242 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.120674 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7") on node "crc"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.149925 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.152518 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.154163 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.154690 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.169479 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.172593 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.172831 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.173042 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4rpnk"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.178127 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.178308 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.181928 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.182017 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.182146 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.182820 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.182876 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.183634 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.183936 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.183969 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.184056 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5d5t\" (UniqueName: \"kubernetes.io/projected/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-kube-api-access-h5d5t\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.184119 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.184199 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.184204 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530bbf9-19c0-4b4f-8ea9-84b8f8a999de\") pod \"rabbitmq-server-2\" (UID: \"793ef49f-7394-4261-a7c1-b262c6744776\") " pod="openstack/rabbitmq-server-2"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.184501 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.184550 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f89dd92db4792d3c870b009b0083cb9063c922fd2565d2430e19a54155fb4df4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.255947 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f8e2ccd-6dbe-4e5a-b0d9-563dddc56af7\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.287762 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.287920 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.287979 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.288052 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.288090 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.288115 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.288137 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5d5t\" (UniqueName: \"kubernetes.io/projected/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-kube-api-access-h5d5t\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.288178 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.288220 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.288277 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.289311 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.289687 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.292374 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.292501 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.294290 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.297568 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.297663 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.302831 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.307455 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.314868 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5d5t\" (UniqueName: \"kubernetes.io/projected/392d1dfb-fb0e-4c96-bd6b-0d85c032f41b-kube-api-access-h5d5t\") pod \"rabbitmq-cell1-server-0\" (UID: \"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.433215 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.513934 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.539312 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" path="/var/lib/kubelet/pods/3f9cc33b-7f85-4c4a-9cf5-074309fd76eb/volumes"
Jan 27 16:09:26 crc kubenswrapper[4966]: I0127 16:09:26.540190 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" path="/var/lib/kubelet/pods/9b278fb8-add2-4ddc-9e93-71962f1bb6fa/volumes"
Jan 27 16:09:27 crc kubenswrapper[4966]: E0127 16:09:27.873651 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 27 16:09:27 crc kubenswrapper[4966]: E0127 16:09:27.873994 4966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 27 16:09:27 crc kubenswrapper[4966]: E0127 16:09:27.874126 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmjqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jjm6h_openstack(65a82b08-ff78-4e1e-b183-e1d06925aa5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 16:09:27 crc kubenswrapper[4966]: E0127 16:09:27.877087 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-jjm6h" podUID="65a82b08-ff78-4e1e-b183-e1d06925aa5e"
Jan 27 16:09:28 crc kubenswrapper[4966]: I0127 16:09:28.301955 4966 scope.go:117] "RemoveContainer" containerID="d49edacde651fb39c747b161fac2fee2073f47d6e0a4fad1361e65b9d66a4739"
Jan 27 16:09:28 crc kubenswrapper[4966]: I0127 16:09:28.400005 4966 scope.go:117] "RemoveContainer" containerID="fd167b1321ad76f31f0fc4ebd397387ffcc7106c1a113bb7df2740ee859fe644"
Jan 27 16:09:28 crc kubenswrapper[4966]: I0127 16:09:28.542189 4966 scope.go:117] "RemoveContainer" containerID="bd0a2da95b03bf8155f1de1f0fff387b68f0343f263fee5bbb7fdf0b0e05dcb1"
Jan 27 16:09:28 crc kubenswrapper[4966]: I0127 16:09:28.652723 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 16:09:28 crc kubenswrapper[4966]: I0127 16:09:28.720130 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20044d98-b229-4e9a-946f-b18902841fe6","Type":"ContainerStarted","Data":"c64846f7b44d76aed3bc93bd63c096a1a9c767154eac045ed6ae2b13727625f1"}
Jan 27 16:09:28 crc kubenswrapper[4966]: E0127 16:09:28.724811 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jjm6h" podUID="65a82b08-ff78-4e1e-b183-e1d06925aa5e"
Jan 27 16:09:28 crc kubenswrapper[4966]: I0127 16:09:28.929580 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-tsf2n"]
Jan 27 16:09:28 crc kubenswrapper[4966]: I0127 16:09:28.962213 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 27 16:09:29 crc kubenswrapper[4966]: I0127 16:09:29.094738 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 16:09:29 crc kubenswrapper[4966]: I0127 16:09:29.736525 4966 generic.go:334] "Generic (PLEG): container finished" podID="a36e5960-b960-4650-ac3c-c588e0047b4e" containerID="898d2ec3db5df3d17746c39bc214ddf9245e74c63fc683dd0acdbd495040da26" exitCode=0
Jan 27 16:09:29 crc kubenswrapper[4966]: I0127 16:09:29.736641 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" event={"ID":"a36e5960-b960-4650-ac3c-c588e0047b4e","Type":"ContainerDied","Data":"898d2ec3db5df3d17746c39bc214ddf9245e74c63fc683dd0acdbd495040da26"}
Jan 27 16:09:29 crc kubenswrapper[4966]: I0127 16:09:29.736678 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" event={"ID":"a36e5960-b960-4650-ac3c-c588e0047b4e","Type":"ContainerStarted","Data":"2ef24a924fd032d78668021cb4b1336d38864cce7d63ffa821bbe555966c6109"}
Jan 27 16:09:29 crc kubenswrapper[4966]: I0127 16:09:29.744036 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"793ef49f-7394-4261-a7c1-b262c6744776","Type":"ContainerStarted","Data":"1b3911dc974b0f41320db37bf15f81228459d4dea4ec1bc40b56ce2fb3a7ca5e"}
Jan 27 16:09:29 crc kubenswrapper[4966]: I0127 16:09:29.747142 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="3f9cc33b-7f85-4c4a-9cf5-074309fd76eb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout"
Jan 27 16:09:29 crc kubenswrapper[4966]: I0127 16:09:29.749628 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b","Type":"ContainerStarted","Data":"6fd4876a4fb718d9e2e4136586b2d6806b55a9d28e691c48038670da28541b54"}
Jan 27 16:09:29 crc kubenswrapper[4966]: I0127 16:09:29.754583 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20044d98-b229-4e9a-946f-b18902841fe6","Type":"ContainerStarted","Data":"122de39f42283800f76df52deb25e7a1b23ba98970ab18f6c2a2304095371fc5"}
Jan 27 16:09:30 crc kubenswrapper[4966]: I0127 16:09:30.047049 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9b278fb8-add2-4ddc-9e93-71962f1bb6fa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: i/o timeout"
Jan 27 16:09:30 crc kubenswrapper[4966]: I0127 16:09:30.791860 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" event={"ID":"a36e5960-b960-4650-ac3c-c588e0047b4e","Type":"ContainerStarted","Data":"699d2fb815f412bbc938d365839d44ed7957452ca1a230a4db7df860cd22f9fb"}
Jan 27 16:09:30 crc kubenswrapper[4966]: I0127 16:09:30.792066 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n"
Jan 27 16:09:30 crc kubenswrapper[4966]: I0127 16:09:30.814445 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" podStartSLOduration=11.814423339 podStartE2EDuration="11.814423339s" podCreationTimestamp="2026-01-27 16:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:09:30.812142867 +0000 UTC m=+1637.114936395" watchObservedRunningTime="2026-01-27 16:09:30.814423339 +0000 UTC m=+1637.117216847"
Jan 27 16:09:31 crc kubenswrapper[4966]: I0127 16:09:31.836475 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"793ef49f-7394-4261-a7c1-b262c6744776","Type":"ContainerStarted","Data":"abc8142eafd1204dd361e187d5cff69a650897e05387882257ec3240aeed9c7a"}
Jan 27 16:09:31 crc kubenswrapper[4966]: I0127 16:09:31.843986 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b","Type":"ContainerStarted","Data":"69c01b062bbd3002121472d755be61ca44c2ade3fbfb634ccf281da542c61938"}
Jan 27 16:09:31 crc kubenswrapper[4966]: I0127 16:09:31.864054 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20044d98-b229-4e9a-946f-b18902841fe6","Type":"ContainerStarted","Data":"475f46f8441d8fa905ed16180d359d33bce699d537e746cece5e94c4fdcb6aa6"}
Jan 27 16:09:32 crc kubenswrapper[4966]: I0127 16:09:32.882412 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20044d98-b229-4e9a-946f-b18902841fe6","Type":"ContainerStarted","Data":"d41f32f2a3053204b53add5ca4419596ee632607d1b6199930931ae5379ac591"}
Jan 27 16:09:32 crc kubenswrapper[4966]: I0127 16:09:32.915999 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.733868193 podStartE2EDuration="23.915976011s" podCreationTimestamp="2026-01-27 16:09:09 +0000 UTC" firstStartedPulling="2026-01-27 16:09:10.348215186 +0000 UTC m=+1616.651008684" lastFinishedPulling="2026-01-27 16:09:32.530323004 +0000 UTC m=+1638.833116502" observedRunningTime="2026-01-27 16:09:32.913329968 +0000 UTC m=+1639.216123466" watchObservedRunningTime="2026-01-27 16:09:32.915976011 +0000 UTC m=+1639.218769529"
Jan 27 16:09:33 crc kubenswrapper[4966]: I0127 16:09:33.894863 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 16:09:34 crc kubenswrapper[4966]: I0127 16:09:34.531662 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"
Jan 27 16:09:34 crc kubenswrapper[4966]: E0127 16:09:34.531967 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.057088 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.189645 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-6w8s7"]
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.190227 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" podUID="05824f99-475d-4f23-84fa-33b23a3030b7" containerName="dnsmasq-dns" containerID="cri-o://76722416e8e1d9ef1654f06a700f693a846cb88706cb407c1b1d9aff39b48390" gracePeriod=10
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.324798 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-c482c"]
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.326890 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.358952 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-c482c"]
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.473628 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.473680 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmxtg\" (UniqueName: \"kubernetes.io/projected/327ce86b-3b3f-4b71-b51a-498ec4a19e63-kube-api-access-jmxtg\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.473735 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.473770 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.473834 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.473947 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.473996 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-config\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.579358 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.579408 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmxtg\" (UniqueName: \"kubernetes.io/projected/327ce86b-3b3f-4b71-b51a-498ec4a19e63-kube-api-access-jmxtg\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.579455 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.579481 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.579527 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.579589 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.579619 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-config\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.580583 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-config\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.581115 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.581685 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.581762 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.582522 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.586644 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/327ce86b-3b3f-4b71-b51a-498ec4a19e63-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.629549 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmxtg\" (UniqueName: \"kubernetes.io/projected/327ce86b-3b3f-4b71-b51a-498ec4a19e63-kube-api-access-jmxtg\") pod \"dnsmasq-dns-5d75f767dc-c482c\" (UID: \"327ce86b-3b3f-4b71-b51a-498ec4a19e63\") " pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.696592 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.928892 4966 generic.go:334] "Generic (PLEG): container finished" podID="05824f99-475d-4f23-84fa-33b23a3030b7" containerID="76722416e8e1d9ef1654f06a700f693a846cb88706cb407c1b1d9aff39b48390" exitCode=0
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.928960 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" event={"ID":"05824f99-475d-4f23-84fa-33b23a3030b7","Type":"ContainerDied","Data":"76722416e8e1d9ef1654f06a700f693a846cb88706cb407c1b1d9aff39b48390"}
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.928986 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7" event={"ID":"05824f99-475d-4f23-84fa-33b23a3030b7","Type":"ContainerDied","Data":"e798561ac868a5113571a29afd5924410061a1040ccfcf721f787a601f1231d9"}
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.928996 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e798561ac868a5113571a29afd5924410061a1040ccfcf721f787a601f1231d9"
Jan 27 16:09:35 crc kubenswrapper[4966]: I0127 16:09:35.938680 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7"
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.093367 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-sb\") pod \"05824f99-475d-4f23-84fa-33b23a3030b7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") "
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.093789 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-nb\") pod \"05824f99-475d-4f23-84fa-33b23a3030b7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") "
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.094019 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kpb2\" (UniqueName: \"kubernetes.io/projected/05824f99-475d-4f23-84fa-33b23a3030b7-kube-api-access-4kpb2\") pod \"05824f99-475d-4f23-84fa-33b23a3030b7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") "
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.094937 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-svc\") pod \"05824f99-475d-4f23-84fa-33b23a3030b7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") "
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.095006 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-swift-storage-0\") pod \"05824f99-475d-4f23-84fa-33b23a3030b7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") "
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.095100 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-config\") pod \"05824f99-475d-4f23-84fa-33b23a3030b7\" (UID: \"05824f99-475d-4f23-84fa-33b23a3030b7\") "
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.102197 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05824f99-475d-4f23-84fa-33b23a3030b7-kube-api-access-4kpb2" (OuterVolumeSpecName: "kube-api-access-4kpb2") pod "05824f99-475d-4f23-84fa-33b23a3030b7" (UID: "05824f99-475d-4f23-84fa-33b23a3030b7"). InnerVolumeSpecName "kube-api-access-4kpb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.174775 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05824f99-475d-4f23-84fa-33b23a3030b7" (UID: "05824f99-475d-4f23-84fa-33b23a3030b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.178487 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05824f99-475d-4f23-84fa-33b23a3030b7" (UID: "05824f99-475d-4f23-84fa-33b23a3030b7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.183827 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-config" (OuterVolumeSpecName: "config") pod "05824f99-475d-4f23-84fa-33b23a3030b7" (UID: "05824f99-475d-4f23-84fa-33b23a3030b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.184474 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05824f99-475d-4f23-84fa-33b23a3030b7" (UID: "05824f99-475d-4f23-84fa-33b23a3030b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.184656 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "05824f99-475d-4f23-84fa-33b23a3030b7" (UID: "05824f99-475d-4f23-84fa-33b23a3030b7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.200788 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.200831 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.200845 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kpb2\" (UniqueName: \"kubernetes.io/projected/05824f99-475d-4f23-84fa-33b23a3030b7-kube-api-access-4kpb2\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.200862 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.200873 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.200882 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05824f99-475d-4f23-84fa-33b23a3030b7-config\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:36 crc kubenswrapper[4966]: W0127 16:09:36.273212 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod327ce86b_3b3f_4b71_b51a_498ec4a19e63.slice/crio-71d9689351a728c35bf24bf949d634375b24975f64ee98290d6287104ab6bfd7 WatchSource:0}: Error finding container 71d9689351a728c35bf24bf949d634375b24975f64ee98290d6287104ab6bfd7: Status 404 returned error can't find the container with id 71d9689351a728c35bf24bf949d634375b24975f64ee98290d6287104ab6bfd7
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.273909 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-c482c"]
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.941492 4966 generic.go:334] "Generic (PLEG): container finished" podID="327ce86b-3b3f-4b71-b51a-498ec4a19e63" containerID="9e3adcb9379fe954157a9c2aef444c7b5eba41d2d6f59d550f311928c33221e1" exitCode=0
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.941814 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-6w8s7"
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.943566 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-c482c" event={"ID":"327ce86b-3b3f-4b71-b51a-498ec4a19e63","Type":"ContainerDied","Data":"9e3adcb9379fe954157a9c2aef444c7b5eba41d2d6f59d550f311928c33221e1"}
Jan 27 16:09:36 crc kubenswrapper[4966]: I0127 16:09:36.943609 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-c482c" event={"ID":"327ce86b-3b3f-4b71-b51a-498ec4a19e63","Type":"ContainerStarted","Data":"71d9689351a728c35bf24bf949d634375b24975f64ee98290d6287104ab6bfd7"}
Jan 27 16:09:37 crc kubenswrapper[4966]: I0127 16:09:37.010031 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-6w8s7"]
Jan 27 16:09:37 crc kubenswrapper[4966]: I0127 16:09:37.024235 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-6w8s7"]
Jan 27 16:09:37 crc kubenswrapper[4966]: I0127 16:09:37.955924 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-c482c" event={"ID":"327ce86b-3b3f-4b71-b51a-498ec4a19e63","Type":"ContainerStarted","Data":"921f0f777d5958baeff473bbdd7de569a177fd42f0507e264a64501bd31f8b41"}
Jan 27 16:09:37 crc kubenswrapper[4966]: I0127 16:09:37.956274 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:37 crc kubenswrapper[4966]: I0127 16:09:37.989675 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-c482c" podStartSLOduration=2.9896491640000002 podStartE2EDuration="2.989649164s" podCreationTimestamp="2026-01-27 16:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:09:37.978502634 +0000 UTC m=+1644.281296152" watchObservedRunningTime="2026-01-27 16:09:37.989649164 +0000 UTC m=+1644.292442682"
Jan 27 16:09:38 crc kubenswrapper[4966]: I0127 16:09:38.540224 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05824f99-475d-4f23-84fa-33b23a3030b7" path="/var/lib/kubelet/pods/05824f99-475d-4f23-84fa-33b23a3030b7/volumes"
Jan 27 16:09:42 crc kubenswrapper[4966]: I0127 16:09:42.016094 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jjm6h" event={"ID":"65a82b08-ff78-4e1e-b183-e1d06925aa5e","Type":"ContainerStarted","Data":"901aa4e8c1cd20bf8bc8fa6bbd75e884c3ecf75c805160a742b20499f86cd6fd"}
Jan 27 16:09:42 crc kubenswrapper[4966]: I0127 16:09:42.056536 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-jjm6h" podStartSLOduration=2.689810242 podStartE2EDuration="41.056511654s" podCreationTimestamp="2026-01-27 16:09:01 +0000 UTC" firstStartedPulling="2026-01-27 16:09:02.371765447 +0000 UTC m=+1608.674558935" lastFinishedPulling="2026-01-27 16:09:40.738466849 +0000 UTC m=+1647.041260347" observedRunningTime="2026-01-27 16:09:42.046609213 +0000 UTC m=+1648.349402731" watchObservedRunningTime="2026-01-27 16:09:42.056511654 +0000 UTC m=+1648.359305152"
Jan 27 16:09:44 crc kubenswrapper[4966]: I0127 16:09:44.054623 4966 generic.go:334] "Generic (PLEG): container finished" podID="65a82b08-ff78-4e1e-b183-e1d06925aa5e" containerID="901aa4e8c1cd20bf8bc8fa6bbd75e884c3ecf75c805160a742b20499f86cd6fd" exitCode=0
Jan 27 16:09:44 crc kubenswrapper[4966]: I0127 16:09:44.054872 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jjm6h" event={"ID":"65a82b08-ff78-4e1e-b183-e1d06925aa5e","Type":"ContainerDied","Data":"901aa4e8c1cd20bf8bc8fa6bbd75e884c3ecf75c805160a742b20499f86cd6fd"}
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.577279 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jjm6h"
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.675785 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmjqt\" (UniqueName: \"kubernetes.io/projected/65a82b08-ff78-4e1e-b183-e1d06925aa5e-kube-api-access-kmjqt\") pod \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") "
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.676273 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-combined-ca-bundle\") pod \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") "
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.676507 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-config-data\") pod \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\" (UID: \"65a82b08-ff78-4e1e-b183-e1d06925aa5e\") "
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.682381 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a82b08-ff78-4e1e-b183-e1d06925aa5e-kube-api-access-kmjqt" (OuterVolumeSpecName: "kube-api-access-kmjqt") pod "65a82b08-ff78-4e1e-b183-e1d06925aa5e" (UID: "65a82b08-ff78-4e1e-b183-e1d06925aa5e"). InnerVolumeSpecName "kube-api-access-kmjqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.698053 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-c482c"
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.733074 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65a82b08-ff78-4e1e-b183-e1d06925aa5e" (UID: "65a82b08-ff78-4e1e-b183-e1d06925aa5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.780675 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-tsf2n"]
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.780939 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" podUID="a36e5960-b960-4650-ac3c-c588e0047b4e" containerName="dnsmasq-dns" containerID="cri-o://699d2fb815f412bbc938d365839d44ed7957452ca1a230a4db7df860cd22f9fb" gracePeriod=10
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.781286 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmjqt\" (UniqueName: \"kubernetes.io/projected/65a82b08-ff78-4e1e-b183-e1d06925aa5e-kube-api-access-kmjqt\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.781320 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.824152 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-config-data" (OuterVolumeSpecName: "config-data") pod "65a82b08-ff78-4e1e-b183-e1d06925aa5e" (UID: "65a82b08-ff78-4e1e-b183-e1d06925aa5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:09:45 crc kubenswrapper[4966]: I0127 16:09:45.884173 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a82b08-ff78-4e1e-b183-e1d06925aa5e-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.129386 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jjm6h" event={"ID":"65a82b08-ff78-4e1e-b183-e1d06925aa5e","Type":"ContainerDied","Data":"33fb0c2e698c84fc8a7d0dc51fa2fc1bad343c8d200431cc8a3881da809c3542"}
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.129432 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33fb0c2e698c84fc8a7d0dc51fa2fc1bad343c8d200431cc8a3881da809c3542"
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.129502 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jjm6h"
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.145851 4966 generic.go:334] "Generic (PLEG): container finished" podID="a36e5960-b960-4650-ac3c-c588e0047b4e" containerID="699d2fb815f412bbc938d365839d44ed7957452ca1a230a4db7df860cd22f9fb" exitCode=0
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.145889 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" event={"ID":"a36e5960-b960-4650-ac3c-c588e0047b4e","Type":"ContainerDied","Data":"699d2fb815f412bbc938d365839d44ed7957452ca1a230a4db7df860cd22f9fb"}
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.298234 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n"
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.410272 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-config\") pod \"a36e5960-b960-4650-ac3c-c588e0047b4e\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") "
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.410400 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-sb\") pod \"a36e5960-b960-4650-ac3c-c588e0047b4e\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") "
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.410440 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-swift-storage-0\") pod \"a36e5960-b960-4650-ac3c-c588e0047b4e\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") "
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.410468 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-openstack-edpm-ipam\") pod \"a36e5960-b960-4650-ac3c-c588e0047b4e\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") "
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.410602 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87dk8\" (UniqueName: \"kubernetes.io/projected/a36e5960-b960-4650-ac3c-c588e0047b4e-kube-api-access-87dk8\") pod \"a36e5960-b960-4650-ac3c-c588e0047b4e\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") "
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.410631 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-nb\") pod \"a36e5960-b960-4650-ac3c-c588e0047b4e\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") "
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.410738 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-svc\") pod \"a36e5960-b960-4650-ac3c-c588e0047b4e\" (UID: \"a36e5960-b960-4650-ac3c-c588e0047b4e\") "
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.418198 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36e5960-b960-4650-ac3c-c588e0047b4e-kube-api-access-87dk8" (OuterVolumeSpecName: "kube-api-access-87dk8") pod "a36e5960-b960-4650-ac3c-c588e0047b4e" (UID: "a36e5960-b960-4650-ac3c-c588e0047b4e"). InnerVolumeSpecName "kube-api-access-87dk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.489979 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a36e5960-b960-4650-ac3c-c588e0047b4e" (UID: "a36e5960-b960-4650-ac3c-c588e0047b4e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.490155 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a36e5960-b960-4650-ac3c-c588e0047b4e" (UID: "a36e5960-b960-4650-ac3c-c588e0047b4e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.494659 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a36e5960-b960-4650-ac3c-c588e0047b4e" (UID: "a36e5960-b960-4650-ac3c-c588e0047b4e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.494686 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-config" (OuterVolumeSpecName: "config") pod "a36e5960-b960-4650-ac3c-c588e0047b4e" (UID: "a36e5960-b960-4650-ac3c-c588e0047b4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.500358 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a36e5960-b960-4650-ac3c-c588e0047b4e" (UID: "a36e5960-b960-4650-ac3c-c588e0047b4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.514503 4966 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.514538 4966 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-config\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.514547 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.514558 4966 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.514568 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.514576 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87dk8\" (UniqueName: \"kubernetes.io/projected/a36e5960-b960-4650-ac3c-c588e0047b4e-kube-api-access-87dk8\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.534625 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a36e5960-b960-4650-ac3c-c588e0047b4e" (UID: "a36e5960-b960-4650-ac3c-c588e0047b4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:09:46 crc kubenswrapper[4966]: I0127 16:09:46.618187 4966 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a36e5960-b960-4650-ac3c-c588e0047b4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.159190 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n" event={"ID":"a36e5960-b960-4650-ac3c-c588e0047b4e","Type":"ContainerDied","Data":"2ef24a924fd032d78668021cb4b1336d38864cce7d63ffa821bbe555966c6109"}
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.159253 4966 scope.go:117] "RemoveContainer" containerID="699d2fb815f412bbc938d365839d44ed7957452ca1a230a4db7df860cd22f9fb"
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.159288 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-tsf2n"
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.186302 4966 scope.go:117] "RemoveContainer" containerID="898d2ec3db5df3d17746c39bc214ddf9245e74c63fc683dd0acdbd495040da26"
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.207946 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-tsf2n"]
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.230026 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-tsf2n"]
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.321824 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-759fbdccc8-4p9db"]
Jan 27 16:09:47 crc kubenswrapper[4966]: E0127 16:09:47.322412 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a82b08-ff78-4e1e-b183-e1d06925aa5e" containerName="heat-db-sync"
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.322439 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a82b08-ff78-4e1e-b183-e1d06925aa5e" containerName="heat-db-sync"
Jan 27 16:09:47 crc kubenswrapper[4966]: E0127 16:09:47.322462 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05824f99-475d-4f23-84fa-33b23a3030b7" containerName="dnsmasq-dns"
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.322470 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="05824f99-475d-4f23-84fa-33b23a3030b7" containerName="dnsmasq-dns"
Jan 27 16:09:47 crc kubenswrapper[4966]: E0127 16:09:47.322486 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36e5960-b960-4650-ac3c-c588e0047b4e" containerName="dnsmasq-dns"
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.322493 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36e5960-b960-4650-ac3c-c588e0047b4e" containerName="dnsmasq-dns"
Jan 27 16:09:47 crc kubenswrapper[4966]: E0127 16:09:47.322511 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05824f99-475d-4f23-84fa-33b23a3030b7" containerName="init"
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.322519 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="05824f99-475d-4f23-84fa-33b23a3030b7" containerName="init"
Jan 27 16:09:47 crc kubenswrapper[4966]: E0127 16:09:47.322555 4966 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="a36e5960-b960-4650-ac3c-c588e0047b4e" containerName="init" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.322563 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36e5960-b960-4650-ac3c-c588e0047b4e" containerName="init" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.322835 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="05824f99-475d-4f23-84fa-33b23a3030b7" containerName="dnsmasq-dns" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.322863 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a82b08-ff78-4e1e-b183-e1d06925aa5e" containerName="heat-db-sync" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.322887 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36e5960-b960-4650-ac3c-c588e0047b4e" containerName="dnsmasq-dns" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.323872 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.344955 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-759fbdccc8-4p9db"] Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.367776 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-dd4644df4-l2k8s"] Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.370517 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.428071 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-dd4644df4-l2k8s"] Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.443608 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-757f8b56d5-hnpxj"] Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.450553 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9a93c7c-a441-4742-9b3f-6b71992b842e-config-data-custom\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.450714 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn4z\" (UniqueName: \"kubernetes.io/projected/ce9500c8-7004-47aa-a51a-050e3ffa6555-kube-api-access-2fn4z\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.450752 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9a93c7c-a441-4742-9b3f-6b71992b842e-combined-ca-bundle\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.450790 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp9dt\" (UniqueName: \"kubernetes.io/projected/f9a93c7c-a441-4742-9b3f-6b71992b842e-kube-api-access-kp9dt\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" 
Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.452862 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9a93c7c-a441-4742-9b3f-6b71992b842e-config-data\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.453010 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-internal-tls-certs\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.453078 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-config-data-custom\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.453173 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-combined-ca-bundle\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.453220 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-config-data\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.453420 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-public-tls-certs\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.462624 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.529365 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-757f8b56d5-hnpxj"] Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.568367 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-internal-tls-certs\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.568725 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzh94\" (UniqueName: \"kubernetes.io/projected/6c074d4e-440a-4518-897c-d05c3197ae79-kube-api-access-lzh94\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.568771 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-config-data-custom\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.568831 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-combined-ca-bundle\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.568866 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-config-data\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.568935 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-internal-tls-certs\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569088 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-config-data\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569186 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-public-tls-certs\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569268 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-public-tls-certs\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569332 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9a93c7c-a441-4742-9b3f-6b71992b842e-config-data-custom\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569424 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fn4z\" (UniqueName: \"kubernetes.io/projected/ce9500c8-7004-47aa-a51a-050e3ffa6555-kube-api-access-2fn4z\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569485 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9a93c7c-a441-4742-9b3f-6b71992b842e-combined-ca-bundle\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569523 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp9dt\" (UniqueName: \"kubernetes.io/projected/f9a93c7c-a441-4742-9b3f-6b71992b842e-kube-api-access-kp9dt\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569600 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-config-data-custom\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569693 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-combined-ca-bundle\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.569909 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9a93c7c-a441-4742-9b3f-6b71992b842e-config-data\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.575101 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9a93c7c-a441-4742-9b3f-6b71992b842e-config-data\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.578307 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-public-tls-certs\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.579379 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9a93c7c-a441-4742-9b3f-6b71992b842e-config-data-custom\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.580169 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9a93c7c-a441-4742-9b3f-6b71992b842e-combined-ca-bundle\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.583486 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-combined-ca-bundle\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.583802 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-config-data-custom\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.584142 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-config-data\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.584570 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9500c8-7004-47aa-a51a-050e3ffa6555-internal-tls-certs\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.591885 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fn4z\" (UniqueName: \"kubernetes.io/projected/ce9500c8-7004-47aa-a51a-050e3ffa6555-kube-api-access-2fn4z\") pod \"heat-api-dd4644df4-l2k8s\" (UID: \"ce9500c8-7004-47aa-a51a-050e3ffa6555\") " pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.592347 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp9dt\" (UniqueName: \"kubernetes.io/projected/f9a93c7c-a441-4742-9b3f-6b71992b842e-kube-api-access-kp9dt\") pod \"heat-engine-759fbdccc8-4p9db\" (UID: \"f9a93c7c-a441-4742-9b3f-6b71992b842e\") " pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.659558 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.672468 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-config-data-custom\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.672520 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-combined-ca-bundle\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.672634 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzh94\" (UniqueName: \"kubernetes.io/projected/6c074d4e-440a-4518-897c-d05c3197ae79-kube-api-access-lzh94\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.672710 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-internal-tls-certs\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.672774 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-config-data\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.672840 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-public-tls-certs\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.676668 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-config-data-custom\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.677171 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-internal-tls-certs\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.677807 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-combined-ca-bundle\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " 
pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.678053 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-config-data\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.681495 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c074d4e-440a-4518-897c-d05c3197ae79-public-tls-certs\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.689222 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzh94\" (UniqueName: \"kubernetes.io/projected/6c074d4e-440a-4518-897c-d05c3197ae79-kube-api-access-lzh94\") pod \"heat-cfnapi-757f8b56d5-hnpxj\" (UID: \"6c074d4e-440a-4518-897c-d05c3197ae79\") " pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.701730 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:47 crc kubenswrapper[4966]: I0127 16:09:47.805159 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:48 crc kubenswrapper[4966]: I0127 16:09:48.211250 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-759fbdccc8-4p9db"] Jan 27 16:09:48 crc kubenswrapper[4966]: I0127 16:09:48.312578 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-dd4644df4-l2k8s"] Jan 27 16:09:48 crc kubenswrapper[4966]: W0127 16:09:48.313584 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce9500c8_7004_47aa_a51a_050e3ffa6555.slice/crio-0a76e4bca2c85901fe0b06215e9acf7e6f0ab93ac77e65cf1ae6dd590fc83552 WatchSource:0}: Error finding container 0a76e4bca2c85901fe0b06215e9acf7e6f0ab93ac77e65cf1ae6dd590fc83552: Status 404 returned error can't find the container with id 0a76e4bca2c85901fe0b06215e9acf7e6f0ab93ac77e65cf1ae6dd590fc83552 Jan 27 16:09:48 crc kubenswrapper[4966]: I0127 16:09:48.414637 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-757f8b56d5-hnpxj"] Jan 27 16:09:48 crc kubenswrapper[4966]: W0127 16:09:48.415159 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c074d4e_440a_4518_897c_d05c3197ae79.slice/crio-0eff220d159907cc4f51fcff8711a04d42faeabaeac00e446fde50b2da70b14e WatchSource:0}: Error finding container 0eff220d159907cc4f51fcff8711a04d42faeabaeac00e446fde50b2da70b14e: Status 404 returned error can't find the container with id 0eff220d159907cc4f51fcff8711a04d42faeabaeac00e446fde50b2da70b14e Jan 27 16:09:48 crc kubenswrapper[4966]: I0127 16:09:48.521416 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:09:48 crc kubenswrapper[4966]: E0127 16:09:48.522209 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:09:48 crc kubenswrapper[4966]: I0127 16:09:48.537485 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36e5960-b960-4650-ac3c-c588e0047b4e" path="/var/lib/kubelet/pods/a36e5960-b960-4650-ac3c-c588e0047b4e/volumes" Jan 27 16:09:49 crc kubenswrapper[4966]: I0127 16:09:49.206524 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" event={"ID":"6c074d4e-440a-4518-897c-d05c3197ae79","Type":"ContainerStarted","Data":"0eff220d159907cc4f51fcff8711a04d42faeabaeac00e446fde50b2da70b14e"} Jan 27 16:09:49 crc kubenswrapper[4966]: I0127 16:09:49.210377 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dd4644df4-l2k8s" event={"ID":"ce9500c8-7004-47aa-a51a-050e3ffa6555","Type":"ContainerStarted","Data":"0a76e4bca2c85901fe0b06215e9acf7e6f0ab93ac77e65cf1ae6dd590fc83552"} Jan 27 16:09:49 crc kubenswrapper[4966]: I0127 16:09:49.213971 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-759fbdccc8-4p9db" event={"ID":"f9a93c7c-a441-4742-9b3f-6b71992b842e","Type":"ContainerStarted","Data":"89309e831fdc05bc1ab0e8f0f08e8a2ae71468f60e1a402ee084143dcb852502"} Jan 27 16:09:49 crc kubenswrapper[4966]: I0127 16:09:49.214023 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-759fbdccc8-4p9db" event={"ID":"f9a93c7c-a441-4742-9b3f-6b71992b842e","Type":"ContainerStarted","Data":"5eef7d2f8d9d4175f3ef1e40173bfa25f955cb28190e316a7d38a47f4afaf15a"} Jan 27 16:09:49 crc kubenswrapper[4966]: I0127 16:09:49.214421 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:09:49 crc kubenswrapper[4966]: I0127 16:09:49.234641 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-759fbdccc8-4p9db" podStartSLOduration=2.234618589 podStartE2EDuration="2.234618589s" podCreationTimestamp="2026-01-27 16:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:09:49.229453187 +0000 UTC m=+1655.532246675" watchObservedRunningTime="2026-01-27 16:09:49.234618589 +0000 UTC m=+1655.537412077" Jan 27 16:09:51 crc kubenswrapper[4966]: I0127 16:09:51.244626 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dd4644df4-l2k8s" event={"ID":"ce9500c8-7004-47aa-a51a-050e3ffa6555","Type":"ContainerStarted","Data":"d231eee32835c1e8435217faaac53b0d5bbc866f2d324897eae21e2319b762c4"} Jan 27 16:09:51 crc kubenswrapper[4966]: I0127 16:09:51.246081 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:51 crc kubenswrapper[4966]: I0127 16:09:51.248368 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" event={"ID":"6c074d4e-440a-4518-897c-d05c3197ae79","Type":"ContainerStarted","Data":"839350f6ce90277b4b5818f48e29575a8efcccffdf2059b9c084df4756729424"} Jan 27 16:09:51 crc kubenswrapper[4966]: I0127 16:09:51.248632 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:51 crc kubenswrapper[4966]: I0127 16:09:51.280548 4966 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-dd4644df4-l2k8s" podStartSLOduration=2.233672558 podStartE2EDuration="4.280528896s" podCreationTimestamp="2026-01-27 16:09:47 +0000 UTC" firstStartedPulling="2026-01-27 16:09:48.315693133 +0000 UTC m=+1654.618486621" lastFinishedPulling="2026-01-27 16:09:50.362549471 +0000 UTC m=+1656.665342959" observedRunningTime="2026-01-27 16:09:51.278813083 +0000 UTC m=+1657.581606571" watchObservedRunningTime="2026-01-27 16:09:51.280528896 +0000 UTC m=+1657.583322384" Jan 27 16:09:51 crc kubenswrapper[4966]: I0127 16:09:51.305589 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" podStartSLOduration=2.3552417119999998 podStartE2EDuration="4.305568452s" podCreationTimestamp="2026-01-27 16:09:47 +0000 UTC" firstStartedPulling="2026-01-27 16:09:48.418504198 +0000 UTC m=+1654.721297686" lastFinishedPulling="2026-01-27 16:09:50.368830938 +0000 UTC m=+1656.671624426" observedRunningTime="2026-01-27 16:09:51.299628615 +0000 UTC m=+1657.602422113" watchObservedRunningTime="2026-01-27 16:09:51.305568452 +0000 UTC m=+1657.608361940" Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.737927 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv"] Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.758171 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv"] Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.758263 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.761564 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.761778 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.762232 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.762697 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.914475 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r5q9\" (UniqueName: \"kubernetes.io/projected/a39c95f2-d908-4593-9ddf-813da13c1f6a-kube-api-access-2r5q9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.914587 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.914765 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:55 crc kubenswrapper[4966]: I0127 16:09:55.914842 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 16:09:56.016928 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r5q9\" (UniqueName: \"kubernetes.io/projected/a39c95f2-d908-4593-9ddf-813da13c1f6a-kube-api-access-2r5q9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 16:09:56.017051 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 16:09:56.017109 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 16:09:56.017149 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 16:09:56.024548 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 16:09:56.024686 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 
16:09:56.024870 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 16:09:56.035853 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r5q9\" (UniqueName: \"kubernetes.io/projected/a39c95f2-d908-4593-9ddf-813da13c1f6a-kube-api-access-2r5q9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 16:09:56.088014 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:09:56 crc kubenswrapper[4966]: W0127 16:09:56.989808 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda39c95f2_d908_4593_9ddf_813da13c1f6a.slice/crio-2ed09b8af3a8301a684e7310307d409317f9968b0d3b304ba7c70d09c3ffe55c WatchSource:0}: Error finding container 2ed09b8af3a8301a684e7310307d409317f9968b0d3b304ba7c70d09c3ffe55c: Status 404 returned error can't find the container with id 2ed09b8af3a8301a684e7310307d409317f9968b0d3b304ba7c70d09c3ffe55c Jan 27 16:09:56 crc kubenswrapper[4966]: I0127 16:09:56.992070 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv"] Jan 27 16:09:57 crc kubenswrapper[4966]: I0127 16:09:57.317194 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" event={"ID":"a39c95f2-d908-4593-9ddf-813da13c1f6a","Type":"ContainerStarted","Data":"2ed09b8af3a8301a684e7310307d409317f9968b0d3b304ba7c70d09c3ffe55c"} Jan 27 16:09:59 crc kubenswrapper[4966]: I0127 16:09:59.374426 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-dd4644df4-l2k8s" Jan 27 16:09:59 crc kubenswrapper[4966]: I0127 16:09:59.460268 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c6c4669c5-dd9mb"] Jan 27 16:09:59 crc kubenswrapper[4966]: I0127 16:09:59.460859 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7c6c4669c5-dd9mb" podUID="8ca408b2-f3aa-4504-9f89-8028f0cdc94a" containerName="heat-api" containerID="cri-o://1cd41d813f83df6bb7678233b9e011a99b55f2e3c3305c812477d3cbfd8f2143" gracePeriod=60 Jan 27 16:09:59 crc kubenswrapper[4966]: I0127 16:09:59.753270 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-757f8b56d5-hnpxj" Jan 27 16:09:59 crc kubenswrapper[4966]: I0127 16:09:59.812980 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d949f4598-q7z92"] Jan 27 16:09:59 crc kubenswrapper[4966]: I0127 16:09:59.813180 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-d949f4598-q7z92" podUID="4ea91e8a-6a88-4f54-a60e-81f68d447beb" containerName="heat-cfnapi" containerID="cri-o://5b3e5fa076c447451790096f85e86ded1780fa91802a91bf88a63b744db33d26" gracePeriod=60 Jan 27 16:10:00 crc 
kubenswrapper[4966]: I0127 16:10:00.527989 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:10:00 crc kubenswrapper[4966]: E0127 16:10:00.528522 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:10:03 crc kubenswrapper[4966]: I0127 16:10:03.392009 4966 generic.go:334] "Generic (PLEG): container finished" podID="392d1dfb-fb0e-4c96-bd6b-0d85c032f41b" containerID="69c01b062bbd3002121472d755be61ca44c2ade3fbfb634ccf281da542c61938" exitCode=0 Jan 27 16:10:03 crc kubenswrapper[4966]: I0127 16:10:03.392103 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b","Type":"ContainerDied","Data":"69c01b062bbd3002121472d755be61ca44c2ade3fbfb634ccf281da542c61938"} Jan 27 16:10:03 crc kubenswrapper[4966]: I0127 16:10:03.396946 4966 generic.go:334] "Generic (PLEG): container finished" podID="8ca408b2-f3aa-4504-9f89-8028f0cdc94a" containerID="1cd41d813f83df6bb7678233b9e011a99b55f2e3c3305c812477d3cbfd8f2143" exitCode=0 Jan 27 16:10:03 crc kubenswrapper[4966]: I0127 16:10:03.397148 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c6c4669c5-dd9mb" event={"ID":"8ca408b2-f3aa-4504-9f89-8028f0cdc94a","Type":"ContainerDied","Data":"1cd41d813f83df6bb7678233b9e011a99b55f2e3c3305c812477d3cbfd8f2143"} Jan 27 16:10:03 crc kubenswrapper[4966]: I0127 16:10:03.398909 4966 generic.go:334] "Generic (PLEG): container finished" podID="793ef49f-7394-4261-a7c1-b262c6744776" containerID="abc8142eafd1204dd361e187d5cff69a650897e05387882257ec3240aeed9c7a" exitCode=0 Jan 27 16:10:03 crc kubenswrapper[4966]: I0127 16:10:03.398935 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"793ef49f-7394-4261-a7c1-b262c6744776","Type":"ContainerDied","Data":"abc8142eafd1204dd361e187d5cff69a650897e05387882257ec3240aeed9c7a"} Jan 27 16:10:03 crc kubenswrapper[4966]: I0127 16:10:03.401223 4966 generic.go:334] "Generic (PLEG): container finished" podID="4ea91e8a-6a88-4f54-a60e-81f68d447beb" containerID="5b3e5fa076c447451790096f85e86ded1780fa91802a91bf88a63b744db33d26" exitCode=0 Jan 27 16:10:03 crc kubenswrapper[4966]: I0127 16:10:03.401312 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d949f4598-q7z92" event={"ID":"4ea91e8a-6a88-4f54-a60e-81f68d447beb","Type":"ContainerDied","Data":"5b3e5fa076c447451790096f85e86ded1780fa91802a91bf88a63b744db33d26"} Jan 27 16:10:04 crc kubenswrapper[4966]: I0127 16:10:04.314301 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7c6c4669c5-dd9mb" podUID="8ca408b2-f3aa-4504-9f89-8028f0cdc94a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.221:8004/healthcheck\": dial tcp 10.217.0.221:8004: connect: connection refused" Jan 27 16:10:04 crc kubenswrapper[4966]: I0127 16:10:04.356732 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-d949f4598-q7z92" podUID="4ea91e8a-6a88-4f54-a60e-81f68d447beb" containerName="heat-cfnapi" probeResult="failure" output="Get 
\"https://10.217.0.222:8000/healthcheck\": dial tcp 10.217.0.222:8000: connect: connection refused" Jan 27 16:10:07 crc kubenswrapper[4966]: I0127 16:10:07.696603 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-759fbdccc8-4p9db" Jan 27 16:10:07 crc kubenswrapper[4966]: I0127 16:10:07.786171 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5d57b89dfc-4z8bl"] Jan 27 16:10:07 crc kubenswrapper[4966]: I0127 16:10:07.786439 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5d57b89dfc-4z8bl" podUID="b96eb211-2a11-469e-9342-6881a3f3c799" containerName="heat-engine" containerID="cri-o://c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb" gracePeriod=60 Jan 27 16:10:09 crc kubenswrapper[4966]: I0127 16:10:09.315278 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7c6c4669c5-dd9mb" podUID="8ca408b2-f3aa-4504-9f89-8028f0cdc94a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.221:8004/healthcheck\": dial tcp 10.217.0.221:8004: connect: connection refused" Jan 27 16:10:09 crc kubenswrapper[4966]: I0127 16:10:09.356158 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-d949f4598-q7z92" podUID="4ea91e8a-6a88-4f54-a60e-81f68d447beb" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.222:8000/healthcheck\": dial tcp 10.217.0.222:8000: connect: connection refused" Jan 27 16:10:10 crc kubenswrapper[4966]: I0127 16:10:10.489880 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.443817 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-fr6bx"] Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.458020 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-fr6bx"] Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.554674 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dfee338-4e72-4d66-aed2-e81ce752c4fc" path="/var/lib/kubelet/pods/0dfee338-4e72-4d66-aed2-e81ce752c4fc/volumes" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.682707 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-969r8"] Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.684360 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.687013 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.705846 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-969r8"] Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.784150 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-combined-ca-bundle\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.784215 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-config-data\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.784279 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-scripts\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.784339 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwpt8\" (UniqueName: \"kubernetes.io/projected/70504957-f2da-439c-abb3-40ef116366ca-kube-api-access-nwpt8\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.887573 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwpt8\" (UniqueName: \"kubernetes.io/projected/70504957-f2da-439c-abb3-40ef116366ca-kube-api-access-nwpt8\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.887861 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-combined-ca-bundle\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.887920 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-config-data\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.887998 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-scripts\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.893078 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-scripts\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.910286 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-config-data\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: E0127 16:10:12.912852 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.913324 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-combined-ca-bundle\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: I0127 16:10:12.925535 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwpt8\" (UniqueName: \"kubernetes.io/projected/70504957-f2da-439c-abb3-40ef116366ca-kube-api-access-nwpt8\") pod \"aodh-db-sync-969r8\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:12 crc kubenswrapper[4966]: E0127 16:10:12.937889 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 16:10:12 crc kubenswrapper[4966]: E0127 16:10:12.944221 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 16:10:12 crc kubenswrapper[4966]: E0127 16:10:12.944365 4966 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5d57b89dfc-4z8bl" podUID="b96eb211-2a11-469e-9342-6881a3f3c799" containerName="heat-engine" Jan 27 16:10:13 crc kubenswrapper[4966]: I0127 16:10:13.006920 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:13 crc kubenswrapper[4966]: I0127 16:10:13.817060 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-969r8"] Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.023195 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:10:14 crc kubenswrapper[4966]: E0127 16:10:14.029664 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 27 16:10:14 crc kubenswrapper[4966]: E0127 16:10:14.029829 4966 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 16:10:14 crc kubenswrapper[4966]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 27 16:10:14 crc kubenswrapper[4966]: - hosts: all Jan 27 16:10:14 crc kubenswrapper[4966]: strategy: linear Jan 27 16:10:14 crc kubenswrapper[4966]: tasks: Jan 27 16:10:14 crc kubenswrapper[4966]: - name: Enable podified-repos Jan 27 16:10:14 crc kubenswrapper[4966]: become: true Jan 27 16:10:14 crc kubenswrapper[4966]: ansible.builtin.shell: | Jan 27 16:10:14 crc kubenswrapper[4966]: set -euxo pipefail Jan 27 16:10:14 crc kubenswrapper[4966]: pushd /var/tmp Jan 27 16:10:14 crc kubenswrapper[4966]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Jan 27 16:10:14 crc kubenswrapper[4966]: pushd repo-setup-main Jan 27 16:10:14 crc kubenswrapper[4966]: python3 -m venv ./venv Jan 27 16:10:14 crc kubenswrapper[4966]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 27 16:10:14 crc kubenswrapper[4966]: ./venv/bin/repo-setup current-podified -b antelope Jan 27 16:10:14 crc kubenswrapper[4966]: popd Jan 27 16:10:14 crc kubenswrapper[4966]: rm -rf repo-setup-main Jan 27 16:10:14 crc kubenswrapper[4966]: Jan 27 16:10:14 crc kubenswrapper[4966]: Jan 27 16:10:14 crc kubenswrapper[4966]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 27 16:10:14 crc kubenswrapper[4966]: edpm_override_hosts: openstack-edpm-ipam Jan 27 16:10:14 crc kubenswrapper[4966]: edpm_service_type: repo-setup Jan 27 16:10:14 crc kubenswrapper[4966]: Jan 27 16:10:14 crc kubenswrapper[4966]: Jan 27 16:10:14 crc kubenswrapper[4966]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r5q9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv_openstack(a39c95f2-d908-4593-9ddf-813da13c1f6a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 27 16:10:14 crc kubenswrapper[4966]: > logger="UnhandledError" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.030794 4966 util.go:48] "No ready sandbox for pod can be found. 
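
NOTE: For reference, the playbook carried in the RUNNER_PLAYBOOK environment variable above, with the kubelet line prefixes stripped; the content is verbatim from the log, only the YAML indentation is reconstructed:

    - hosts: all
      strategy: linear
      tasks:
        - name: Enable podified-repos
          become: true
          ansible.builtin.shell: |
            set -euxo pipefail
            pushd /var/tmp
            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
            pushd repo-setup-main
            python3 -m venv ./venv
            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
            ./venv/bin/repo-setup current-podified -b antelope
            popd
            rm -rf repo-setup-main

and the accompanying RUNNER_EXTRA_VARS:

    edpm_override_hosts: openstack-edpm-ipam
    edpm_service_type: repo-setup

The ErrImagePull itself ("copying config: context canceled") is transient here: the same pod's container starts successfully at 16:10:29 further down.
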
Need to start a new one" pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:10:14 crc kubenswrapper[4966]: E0127 16:10:14.030994 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" podUID="a39c95f2-d908-4593-9ddf-813da13c1f6a" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.130157 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-internal-tls-certs\") pod \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.130671 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-public-tls-certs\") pod \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.130737 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-public-tls-certs\") pod \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.130816 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data\") pod \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.130883 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-combined-ca-bundle\") pod \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.130956 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data-custom\") pod \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.130994 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl2sw\" (UniqueName: \"kubernetes.io/projected/4ea91e8a-6a88-4f54-a60e-81f68d447beb-kube-api-access-kl2sw\") pod \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.131072 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2rqc\" (UniqueName: \"kubernetes.io/projected/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-kube-api-access-s2rqc\") pod \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.131103 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data-custom\") pod \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.131192 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-combined-ca-bundle\") pod \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.131243 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data\") pod \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\" (UID: \"8ca408b2-f3aa-4504-9f89-8028f0cdc94a\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.131261 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-internal-tls-certs\") pod \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\" (UID: \"4ea91e8a-6a88-4f54-a60e-81f68d447beb\") " Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.361788 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea91e8a-6a88-4f54-a60e-81f68d447beb-kube-api-access-kl2sw" (OuterVolumeSpecName: "kube-api-access-kl2sw") pod "4ea91e8a-6a88-4f54-a60e-81f68d447beb" (UID: "4ea91e8a-6a88-4f54-a60e-81f68d447beb"). InnerVolumeSpecName "kube-api-access-kl2sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.364673 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-kube-api-access-s2rqc" (OuterVolumeSpecName: "kube-api-access-s2rqc") pod "8ca408b2-f3aa-4504-9f89-8028f0cdc94a" (UID: "8ca408b2-f3aa-4504-9f89-8028f0cdc94a"). InnerVolumeSpecName "kube-api-access-s2rqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.374182 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ca408b2-f3aa-4504-9f89-8028f0cdc94a" (UID: "8ca408b2-f3aa-4504-9f89-8028f0cdc94a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.379988 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4ea91e8a-6a88-4f54-a60e-81f68d447beb" (UID: "4ea91e8a-6a88-4f54-a60e-81f68d447beb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.422951 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ea91e8a-6a88-4f54-a60e-81f68d447beb" (UID: "4ea91e8a-6a88-4f54-a60e-81f68d447beb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.451722 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2rqc\" (UniqueName: \"kubernetes.io/projected/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-kube-api-access-s2rqc\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.451755 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.451766 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.451776 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.451785 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl2sw\" (UniqueName: \"kubernetes.io/projected/4ea91e8a-6a88-4f54-a60e-81f68d447beb-kube-api-access-kl2sw\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.474918 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8ca408b2-f3aa-4504-9f89-8028f0cdc94a" (UID: "8ca408b2-f3aa-4504-9f89-8028f0cdc94a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.483877 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data" (OuterVolumeSpecName: "config-data") pod "8ca408b2-f3aa-4504-9f89-8028f0cdc94a" (UID: "8ca408b2-f3aa-4504-9f89-8028f0cdc94a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.497025 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ca408b2-f3aa-4504-9f89-8028f0cdc94a" (UID: "8ca408b2-f3aa-4504-9f89-8028f0cdc94a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.511143 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8ca408b2-f3aa-4504-9f89-8028f0cdc94a" (UID: "8ca408b2-f3aa-4504-9f89-8028f0cdc94a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.522865 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:10:14 crc kubenswrapper[4966]: E0127 16:10:14.523789 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.529343 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4ea91e8a-6a88-4f54-a60e-81f68d447beb" (UID: "4ea91e8a-6a88-4f54-a60e-81f68d447beb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.538882 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4ea91e8a-6a88-4f54-a60e-81f68d447beb" (UID: "4ea91e8a-6a88-4f54-a60e-81f68d447beb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.538962 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data" (OuterVolumeSpecName: "config-data") pod "4ea91e8a-6a88-4f54-a60e-81f68d447beb" (UID: "4ea91e8a-6a88-4f54-a60e-81f68d447beb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.545617 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c6c4669c5-dd9mb" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.546203 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d949f4598-q7z92" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.553989 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.554031 4966 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.554046 4966 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.554059 4966 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.554070 4966 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.554081 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea91e8a-6a88-4f54-a60e-81f68d447beb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.554095 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca408b2-f3aa-4504-9f89-8028f0cdc94a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:14 crc kubenswrapper[4966]: E0127 16:10:14.556867 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" podUID="a39c95f2-d908-4593-9ddf-813da13c1f6a" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.579391 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c6c4669c5-dd9mb" event={"ID":"8ca408b2-f3aa-4504-9f89-8028f0cdc94a","Type":"ContainerDied","Data":"c7a82188155f3aaa5440fd9d351780fb77e7f69950a47edc88c1737eef393156"} Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.579466 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"793ef49f-7394-4261-a7c1-b262c6744776","Type":"ContainerStarted","Data":"a6272cf19cec68867524ceb29fe53eb714aa8ca4aa7e64e1d4ac0364a9fa6a3f"} Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.579483 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d949f4598-q7z92" event={"ID":"4ea91e8a-6a88-4f54-a60e-81f68d447beb","Type":"ContainerDied","Data":"f0df67c86d0aa7dbef7664b195448ff033981d162d9a4a9ef959aebfba051e79"} Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.579500 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-969r8" 
event={"ID":"70504957-f2da-439c-abb3-40ef116366ca","Type":"ContainerStarted","Data":"0c02bf5257db01debad2282780d3570e186514e4d1782fc695a6c83d2eda968e"} Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.579531 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"392d1dfb-fb0e-4c96-bd6b-0d85c032f41b","Type":"ContainerStarted","Data":"b526ba27219d9a6b60d66855749810ae8a4ee3c9bdc0f68abf4962272012730b"} Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.579527 4966 scope.go:117] "RemoveContainer" containerID="1cd41d813f83df6bb7678233b9e011a99b55f2e3c3305c812477d3cbfd8f2143" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.580933 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.580966 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.620004 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=49.619955114 podStartE2EDuration="49.619955114s" podCreationTimestamp="2026-01-27 16:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:10:14.610167208 +0000 UTC m=+1680.912960716" watchObservedRunningTime="2026-01-27 16:10:14.619955114 +0000 UTC m=+1680.922748602" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.647888 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=48.64787058 podStartE2EDuration="48.64787058s" podCreationTimestamp="2026-01-27 16:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:10:14.629651279 +0000 UTC m=+1680.932444787" watchObservedRunningTime="2026-01-27 16:10:14.64787058 +0000 UTC m=+1680.950664068" Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.673525 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c6c4669c5-dd9mb"] Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.687144 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7c6c4669c5-dd9mb"] Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.698492 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d949f4598-q7z92"] Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.708607 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-d949f4598-q7z92"] Jan 27 16:10:14 crc kubenswrapper[4966]: I0127 16:10:14.746346 4966 scope.go:117] "RemoveContainer" containerID="5b3e5fa076c447451790096f85e86ded1780fa91802a91bf88a63b744db33d26" Jan 27 16:10:16 crc kubenswrapper[4966]: I0127 16:10:16.546269 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea91e8a-6a88-4f54-a60e-81f68d447beb" path="/var/lib/kubelet/pods/4ea91e8a-6a88-4f54-a60e-81f68d447beb/volumes" Jan 27 16:10:16 crc kubenswrapper[4966]: I0127 16:10:16.547127 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca408b2-f3aa-4504-9f89-8028f0cdc94a" path="/var/lib/kubelet/pods/8ca408b2-f3aa-4504-9f89-8028f0cdc94a/volumes" Jan 27 16:10:16 crc kubenswrapper[4966]: E0127 16:10:16.902388 4966 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:10:20 crc kubenswrapper[4966]: E0127 16:10:20.347250 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:10:22 crc kubenswrapper[4966]: E0127 16:10:22.905587 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 16:10:22 crc kubenswrapper[4966]: E0127 16:10:22.907614 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 16:10:22 crc kubenswrapper[4966]: E0127 16:10:22.909444 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 16:10:22 crc kubenswrapper[4966]: E0127 16:10:22.909495 4966 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5d57b89dfc-4z8bl" podUID="b96eb211-2a11-469e-9342-6881a3f3c799" containerName="heat-engine" Jan 27 16:10:23 crc kubenswrapper[4966]: I0127 16:10:23.517076 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 16:10:24 crc kubenswrapper[4966]: I0127 16:10:24.725290 4966 generic.go:334] "Generic (PLEG): container finished" podID="b96eb211-2a11-469e-9342-6881a3f3c799" containerID="c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb" exitCode=0 Jan 27 16:10:24 crc kubenswrapper[4966]: I0127 16:10:24.725395 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d57b89dfc-4z8bl" event={"ID":"b96eb211-2a11-469e-9342-6881a3f3c799","Type":"ContainerDied","Data":"c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb"} Jan 27 16:10:24 crc kubenswrapper[4966]: I0127 16:10:24.728727 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-969r8" event={"ID":"70504957-f2da-439c-abb3-40ef116366ca","Type":"ContainerStarted","Data":"078a9912b217988550b34e53675cc48b1975547984519f14f6aac999492dc6f7"} Jan 27 16:10:24 crc kubenswrapper[4966]: I0127 16:10:24.762431 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-969r8" podStartSLOduration=3.045287636 podStartE2EDuration="12.762408047s" podCreationTimestamp="2026-01-27 16:10:12 +0000 UTC" firstStartedPulling="2026-01-27 16:10:13.797462374 +0000 UTC m=+1680.100255862" lastFinishedPulling="2026-01-27 
16:10:23.514582785 +0000 UTC m=+1689.817376273" observedRunningTime="2026-01-27 16:10:24.748379867 +0000 UTC m=+1691.051173375" watchObservedRunningTime="2026-01-27 16:10:24.762408047 +0000 UTC m=+1691.065201535"
Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.167389 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5d57b89dfc-4z8bl"
Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.314856 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-combined-ca-bundle\") pod \"b96eb211-2a11-469e-9342-6881a3f3c799\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") "
Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.315056 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh4hm\" (UniqueName: \"kubernetes.io/projected/b96eb211-2a11-469e-9342-6881a3f3c799-kube-api-access-vh4hm\") pod \"b96eb211-2a11-469e-9342-6881a3f3c799\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") "
Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.315111 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data-custom\") pod \"b96eb211-2a11-469e-9342-6881a3f3c799\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") "
Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.315199 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data\") pod \"b96eb211-2a11-469e-9342-6881a3f3c799\" (UID: \"b96eb211-2a11-469e-9342-6881a3f3c799\") "
Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.323206 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b96eb211-2a11-469e-9342-6881a3f3c799" (UID: "b96eb211-2a11-469e-9342-6881a3f3c799"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.329117 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96eb211-2a11-469e-9342-6881a3f3c799-kube-api-access-vh4hm" (OuterVolumeSpecName: "kube-api-access-vh4hm") pod "b96eb211-2a11-469e-9342-6881a3f3c799" (UID: "b96eb211-2a11-469e-9342-6881a3f3c799"). InnerVolumeSpecName "kube-api-access-vh4hm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.357030 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b96eb211-2a11-469e-9342-6881a3f3c799" (UID: "b96eb211-2a11-469e-9342-6881a3f3c799"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.393942 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data" (OuterVolumeSpecName: "config-data") pod "b96eb211-2a11-469e-9342-6881a3f3c799" (UID: "b96eb211-2a11-469e-9342-6881a3f3c799"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
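
NOTE: The startup-latency arithmetic in the tracker entry above checks out: aodh-db-sync-969r8 spent 16:10:13.797462374 to 16:10:23.514582785 pulling its image (9.717120411s), and podStartE2EDuration minus that pull time, 12.762408047s - 9.717120411s, is exactly the reported podStartSLOduration of 3.045287636s; the SLO figure excludes image pulling while the E2E figure includes it.
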
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.418626 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.418666 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh4hm\" (UniqueName: \"kubernetes.io/projected/b96eb211-2a11-469e-9342-6881a3f3c799-kube-api-access-vh4hm\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.418681 4966 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.418693 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96eb211-2a11-469e-9342-6881a3f3c799-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.520921 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:10:25 crc kubenswrapper[4966]: E0127 16:10:25.521336 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.769644 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5d57b89dfc-4z8bl" Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.771134 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d57b89dfc-4z8bl" event={"ID":"b96eb211-2a11-469e-9342-6881a3f3c799","Type":"ContainerDied","Data":"cdd2b07a4ff3851754e8f705edfcc846b84ecc656f3848d792fd555b344c2af5"} Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.771192 4966 scope.go:117] "RemoveContainer" containerID="c5cdbb6a04da173be2362aafa21817e560aad65c2444fe4d05c486c0eec640cb" Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.811290 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5d57b89dfc-4z8bl"] Jan 27 16:10:25 crc kubenswrapper[4966]: I0127 16:10:25.825723 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5d57b89dfc-4z8bl"] Jan 27 16:10:26 crc kubenswrapper[4966]: I0127 16:10:26.435669 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="793ef49f-7394-4261-a7c1-b262c6744776" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused" Jan 27 16:10:26 crc kubenswrapper[4966]: I0127 16:10:26.516661 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="392d1dfb-fb0e-4c96-bd6b-0d85c032f41b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.14:5671: connect: connection refused" Jan 27 16:10:26 crc kubenswrapper[4966]: I0127 16:10:26.533072 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96eb211-2a11-469e-9342-6881a3f3c799" path="/var/lib/kubelet/pods/b96eb211-2a11-469e-9342-6881a3f3c799/volumes" Jan 27 16:10:26 crc kubenswrapper[4966]: I0127 16:10:26.781312 4966 generic.go:334] "Generic (PLEG): container finished" podID="70504957-f2da-439c-abb3-40ef116366ca" containerID="078a9912b217988550b34e53675cc48b1975547984519f14f6aac999492dc6f7" exitCode=0 Jan 27 16:10:26 crc kubenswrapper[4966]: I0127 16:10:26.781422 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-969r8" event={"ID":"70504957-f2da-439c-abb3-40ef116366ca","Type":"ContainerDied","Data":"078a9912b217988550b34e53675cc48b1975547984519f14f6aac999492dc6f7"} Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.298560 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.389010 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-config-data\") pod \"70504957-f2da-439c-abb3-40ef116366ca\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.389069 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwpt8\" (UniqueName: \"kubernetes.io/projected/70504957-f2da-439c-abb3-40ef116366ca-kube-api-access-nwpt8\") pod \"70504957-f2da-439c-abb3-40ef116366ca\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.389137 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-scripts\") pod \"70504957-f2da-439c-abb3-40ef116366ca\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.389211 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-combined-ca-bundle\") pod \"70504957-f2da-439c-abb3-40ef116366ca\" (UID: \"70504957-f2da-439c-abb3-40ef116366ca\") " Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.396852 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-scripts" (OuterVolumeSpecName: "scripts") pod "70504957-f2da-439c-abb3-40ef116366ca" (UID: "70504957-f2da-439c-abb3-40ef116366ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.400501 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70504957-f2da-439c-abb3-40ef116366ca-kube-api-access-nwpt8" (OuterVolumeSpecName: "kube-api-access-nwpt8") pod "70504957-f2da-439c-abb3-40ef116366ca" (UID: "70504957-f2da-439c-abb3-40ef116366ca"). InnerVolumeSpecName "kube-api-access-nwpt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.421812 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-config-data" (OuterVolumeSpecName: "config-data") pod "70504957-f2da-439c-abb3-40ef116366ca" (UID: "70504957-f2da-439c-abb3-40ef116366ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.425171 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70504957-f2da-439c-abb3-40ef116366ca" (UID: "70504957-f2da-439c-abb3-40ef116366ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.492338 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.492376 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwpt8\" (UniqueName: \"kubernetes.io/projected/70504957-f2da-439c-abb3-40ef116366ca-kube-api-access-nwpt8\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.492388 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.492396 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70504957-f2da-439c-abb3-40ef116366ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.804958 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-969r8" event={"ID":"70504957-f2da-439c-abb3-40ef116366ca","Type":"ContainerDied","Data":"0c02bf5257db01debad2282780d3570e186514e4d1782fc695a6c83d2eda968e"} Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.805000 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c02bf5257db01debad2282780d3570e186514e4d1782fc695a6c83d2eda968e" Jan 27 16:10:28 crc kubenswrapper[4966]: I0127 16:10:28.805051 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-969r8" Jan 27 16:10:29 crc kubenswrapper[4966]: I0127 16:10:29.022607 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:10:29 crc kubenswrapper[4966]: I0127 16:10:29.819190 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" event={"ID":"a39c95f2-d908-4593-9ddf-813da13c1f6a","Type":"ContainerStarted","Data":"83383f467118dddfd416387c0fec988cfafae2fa45413d7bbe8416ee0fcb6b11"} Jan 27 16:10:30 crc kubenswrapper[4966]: E0127 16:10:30.662782 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:10:31 crc kubenswrapper[4966]: E0127 16:10:31.640382 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:10:32 crc kubenswrapper[4966]: I0127 16:10:32.812469 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" podStartSLOduration=5.786051299 podStartE2EDuration="37.812433543s" podCreationTimestamp="2026-01-27 16:09:55 +0000 UTC" firstStartedPulling="2026-01-27 16:09:56.992736359 +0000 UTC m=+1663.295529867" lastFinishedPulling="2026-01-27 16:10:29.019118603 +0000 UTC m=+1695.321912111" observedRunningTime="2026-01-27 16:10:29.842930365 +0000 UTC 
m=+1696.145723853" watchObservedRunningTime="2026-01-27 16:10:32.812433543 +0000 UTC m=+1699.115227111" Jan 27 16:10:32 crc kubenswrapper[4966]: I0127 16:10:32.825635 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 16:10:32 crc kubenswrapper[4966]: I0127 16:10:32.825932 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-api" containerID="cri-o://93c5553a37fa0703ee26e0e9c080dd7ed8ebf27bdf0aa354dbead102f0837361" gracePeriod=30 Jan 27 16:10:32 crc kubenswrapper[4966]: I0127 16:10:32.826114 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-notifier" containerID="cri-o://d82b63dabc7cb4a8248c8832270e6d4c8f3c260b49478782ed0a8a5d3d80e18a" gracePeriod=30 Jan 27 16:10:32 crc kubenswrapper[4966]: I0127 16:10:32.826342 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-listener" containerID="cri-o://5076bce9517105fea3b188931b6f701c0c0d74cb8d4d6bea7bedc42ff87dfe47" gracePeriod=30 Jan 27 16:10:32 crc kubenswrapper[4966]: I0127 16:10:32.826311 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-evaluator" containerID="cri-o://81a79cc083a72432c68279e877a519d0de42243d01dd05f0cd6873c816ed34b3" gracePeriod=30 Jan 27 16:10:33 crc kubenswrapper[4966]: I0127 16:10:33.865649 4966 generic.go:334] "Generic (PLEG): container finished" podID="4c9455be-e18b-4b63-aa95-b564b865c894" containerID="81a79cc083a72432c68279e877a519d0de42243d01dd05f0cd6873c816ed34b3" exitCode=0 Jan 27 16:10:33 crc kubenswrapper[4966]: I0127 16:10:33.865951 4966 generic.go:334] "Generic (PLEG): container finished" podID="4c9455be-e18b-4b63-aa95-b564b865c894" containerID="93c5553a37fa0703ee26e0e9c080dd7ed8ebf27bdf0aa354dbead102f0837361" exitCode=0 Jan 27 16:10:33 crc kubenswrapper[4966]: I0127 16:10:33.865971 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerDied","Data":"81a79cc083a72432c68279e877a519d0de42243d01dd05f0cd6873c816ed34b3"} Jan 27 16:10:33 crc kubenswrapper[4966]: I0127 16:10:33.865996 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerDied","Data":"93c5553a37fa0703ee26e0e9c080dd7ed8ebf27bdf0aa354dbead102f0837361"} Jan 27 16:10:36 crc kubenswrapper[4966]: I0127 16:10:36.435144 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 16:10:36 crc kubenswrapper[4966]: I0127 16:10:36.492341 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 16:10:36 crc kubenswrapper[4966]: I0127 16:10:36.540455 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 16:10:37 crc kubenswrapper[4966]: I0127 16:10:37.909582 4966 generic.go:334] "Generic (PLEG): container finished" podID="4c9455be-e18b-4b63-aa95-b564b865c894" containerID="d82b63dabc7cb4a8248c8832270e6d4c8f3c260b49478782ed0a8a5d3d80e18a" exitCode=0 Jan 27 16:10:37 crc kubenswrapper[4966]: I0127 16:10:37.909663 4966 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerDied","Data":"d82b63dabc7cb4a8248c8832270e6d4c8f3c260b49478782ed0a8a5d3d80e18a"} Jan 27 16:10:38 crc kubenswrapper[4966]: I0127 16:10:38.958316 4966 generic.go:334] "Generic (PLEG): container finished" podID="4c9455be-e18b-4b63-aa95-b564b865c894" containerID="5076bce9517105fea3b188931b6f701c0c0d74cb8d4d6bea7bedc42ff87dfe47" exitCode=0 Jan 27 16:10:38 crc kubenswrapper[4966]: I0127 16:10:38.958419 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerDied","Data":"5076bce9517105fea3b188931b6f701c0c0d74cb8d4d6bea7bedc42ff87dfe47"} Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.433933 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.466616 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-config-data\") pod \"4c9455be-e18b-4b63-aa95-b564b865c894\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.466707 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-scripts\") pod \"4c9455be-e18b-4b63-aa95-b564b865c894\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.466815 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-combined-ca-bundle\") pod \"4c9455be-e18b-4b63-aa95-b564b865c894\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.467053 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-internal-tls-certs\") pod \"4c9455be-e18b-4b63-aa95-b564b865c894\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.467759 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n565l\" (UniqueName: \"kubernetes.io/projected/4c9455be-e18b-4b63-aa95-b564b865c894-kube-api-access-n565l\") pod \"4c9455be-e18b-4b63-aa95-b564b865c894\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.467813 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-public-tls-certs\") pod \"4c9455be-e18b-4b63-aa95-b564b865c894\" (UID: \"4c9455be-e18b-4b63-aa95-b564b865c894\") " Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.473779 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9455be-e18b-4b63-aa95-b564b865c894-kube-api-access-n565l" (OuterVolumeSpecName: "kube-api-access-n565l") pod "4c9455be-e18b-4b63-aa95-b564b865c894" (UID: "4c9455be-e18b-4b63-aa95-b564b865c894"). InnerVolumeSpecName "kube-api-access-n565l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.476124 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-scripts" (OuterVolumeSpecName: "scripts") pod "4c9455be-e18b-4b63-aa95-b564b865c894" (UID: "4c9455be-e18b-4b63-aa95-b564b865c894"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.571367 4966 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.571413 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n565l\" (UniqueName: \"kubernetes.io/projected/4c9455be-e18b-4b63-aa95-b564b865c894-kube-api-access-n565l\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.575347 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4c9455be-e18b-4b63-aa95-b564b865c894" (UID: "4c9455be-e18b-4b63-aa95-b564b865c894"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.579524 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4c9455be-e18b-4b63-aa95-b564b865c894" (UID: "4c9455be-e18b-4b63-aa95-b564b865c894"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.637099 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-config-data" (OuterVolumeSpecName: "config-data") pod "4c9455be-e18b-4b63-aa95-b564b865c894" (UID: "4c9455be-e18b-4b63-aa95-b564b865c894"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.684883 4966 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.685197 4966 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.685286 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.698257 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c9455be-e18b-4b63-aa95-b564b865c894" (UID: "4c9455be-e18b-4b63-aa95-b564b865c894"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.788194 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9455be-e18b-4b63-aa95-b564b865c894-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.972181 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4c9455be-e18b-4b63-aa95-b564b865c894","Type":"ContainerDied","Data":"b9fe43d9f4a425f5e5126877ec92e0963be6eb95102ccdbaa4f1e78392517940"} Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.972239 4966 scope.go:117] "RemoveContainer" containerID="5076bce9517105fea3b188931b6f701c0c0d74cb8d4d6bea7bedc42ff87dfe47" Jan 27 16:10:39 crc kubenswrapper[4966]: I0127 16:10:39.972386 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.009441 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.014348 4966 scope.go:117] "RemoveContainer" containerID="d82b63dabc7cb4a8248c8832270e6d4c8f3c260b49478782ed0a8a5d3d80e18a" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.022931 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.048523 4966 scope.go:117] "RemoveContainer" containerID="81a79cc083a72432c68279e877a519d0de42243d01dd05f0cd6873c816ed34b3" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.049170 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 16:10:40 crc kubenswrapper[4966]: E0127 16:10:40.049690 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-evaluator" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.049712 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-evaluator" Jan 27 16:10:40 crc kubenswrapper[4966]: E0127 16:10:40.049731 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96eb211-2a11-469e-9342-6881a3f3c799" containerName="heat-engine" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.049737 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96eb211-2a11-469e-9342-6881a3f3c799" containerName="heat-engine" Jan 27 16:10:40 crc kubenswrapper[4966]: E0127 16:10:40.049750 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-listener" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.049756 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-listener" Jan 27 16:10:40 crc kubenswrapper[4966]: E0127 16:10:40.049763 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca408b2-f3aa-4504-9f89-8028f0cdc94a" containerName="heat-api" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.049768 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca408b2-f3aa-4504-9f89-8028f0cdc94a" containerName="heat-api" Jan 27 16:10:40 crc kubenswrapper[4966]: E0127 16:10:40.049788 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70504957-f2da-439c-abb3-40ef116366ca" containerName="aodh-db-sync" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 
16:10:40.049794 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="70504957-f2da-439c-abb3-40ef116366ca" containerName="aodh-db-sync" Jan 27 16:10:40 crc kubenswrapper[4966]: E0127 16:10:40.049811 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea91e8a-6a88-4f54-a60e-81f68d447beb" containerName="heat-cfnapi" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.049816 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea91e8a-6a88-4f54-a60e-81f68d447beb" containerName="heat-cfnapi" Jan 27 16:10:40 crc kubenswrapper[4966]: E0127 16:10:40.049824 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-notifier" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.049829 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-notifier" Jan 27 16:10:40 crc kubenswrapper[4966]: E0127 16:10:40.049843 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-api" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.049849 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-api" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.050104 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-api" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.050119 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea91e8a-6a88-4f54-a60e-81f68d447beb" containerName="heat-cfnapi" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.050135 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96eb211-2a11-469e-9342-6881a3f3c799" containerName="heat-engine" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.050143 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-notifier" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.050153 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-evaluator" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.050165 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="70504957-f2da-439c-abb3-40ef116366ca" containerName="aodh-db-sync" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.050188 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" containerName="aodh-listener" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.050200 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca408b2-f3aa-4504-9f89-8028f0cdc94a" containerName="heat-api" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.052481 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.057089 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.057535 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jknwm" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.057700 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.057934 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.058217 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.077053 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.094882 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-internal-tls-certs\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.095089 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-config-data\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.095187 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-scripts\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.095276 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-combined-ca-bundle\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.095356 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-public-tls-certs\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.095444 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6zfq\" (UniqueName: \"kubernetes.io/projected/677fc3da-714e-4157-a167-cd49355a7e62-kube-api-access-g6zfq\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.101856 4966 scope.go:117] "RemoveContainer" containerID="93c5553a37fa0703ee26e0e9c080dd7ed8ebf27bdf0aa354dbead102f0837361" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.198003 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-internal-tls-certs\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.198246 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-config-data\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.198328 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-scripts\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.198421 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-combined-ca-bundle\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.198538 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-public-tls-certs\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.198646 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6zfq\" (UniqueName: \"kubernetes.io/projected/677fc3da-714e-4157-a167-cd49355a7e62-kube-api-access-g6zfq\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.202331 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-internal-tls-certs\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.202442 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-public-tls-certs\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.203507 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-scripts\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.203529 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-combined-ca-bundle\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.204507 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677fc3da-714e-4157-a167-cd49355a7e62-config-data\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " 
pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.235473 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6zfq\" (UniqueName: \"kubernetes.io/projected/677fc3da-714e-4157-a167-cd49355a7e62-kube-api-access-g6zfq\") pod \"aodh-0\" (UID: \"677fc3da-714e-4157-a167-cd49355a7e62\") " pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.402753 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.522776 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:10:40 crc kubenswrapper[4966]: E0127 16:10:40.523219 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.553371 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9455be-e18b-4b63-aa95-b564b865c894" path="/var/lib/kubelet/pods/4c9455be-e18b-4b63-aa95-b564b865c894/volumes" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.643026 4966 scope.go:117] "RemoveContainer" containerID="6445ae7fbca2748c1be5760b43581fe71bb7bb18839c7b2bc3e612f69dbf0ae4" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.687994 4966 scope.go:117] "RemoveContainer" containerID="ab1cd6427956ddc79ebd492971f4d59b36923d54310383625b2019c8b8887456" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.730460 4966 scope.go:117] "RemoveContainer" containerID="17b91c1fa92e87c3067c3108be3b36234a3bb71530608be06b81511a7be2f323" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.781098 4966 scope.go:117] "RemoveContainer" containerID="15f5c81dd9d223e1dfecc2cdc46452c5a6b63c767021b2b37e8a04cb85a7ba5f" Jan 27 16:10:40 crc kubenswrapper[4966]: I0127 16:10:40.962032 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 16:10:41 crc kubenswrapper[4966]: E0127 16:10:41.023853 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:10:41 crc kubenswrapper[4966]: I0127 16:10:41.248029 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerName="rabbitmq" containerID="cri-o://83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a" gracePeriod=604796 Jan 27 16:10:42 crc kubenswrapper[4966]: I0127 16:10:42.006953 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"677fc3da-714e-4157-a167-cd49355a7e62","Type":"ContainerStarted","Data":"557a9adf0f326f6f3869152b94b77c5b1542fbbe41623bfa910a32bf66f3bcea"} Jan 27 16:10:42 crc kubenswrapper[4966]: I0127 16:10:42.007010 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"677fc3da-714e-4157-a167-cd49355a7e62","Type":"ContainerStarted","Data":"c5ae9244a6f13c23093876763d2b8961e260c1c332216feb96fa53a0f77c233b"} Jan 27 16:10:43 crc kubenswrapper[4966]: I0127 16:10:43.016640 4966 generic.go:334] "Generic (PLEG): container finished" podID="a39c95f2-d908-4593-9ddf-813da13c1f6a" containerID="83383f467118dddfd416387c0fec988cfafae2fa45413d7bbe8416ee0fcb6b11" exitCode=0 Jan 27 16:10:43 crc kubenswrapper[4966]: I0127 16:10:43.016786 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" event={"ID":"a39c95f2-d908-4593-9ddf-813da13c1f6a","Type":"ContainerDied","Data":"83383f467118dddfd416387c0fec988cfafae2fa45413d7bbe8416ee0fcb6b11"} Jan 27 16:10:43 crc kubenswrapper[4966]: I0127 16:10:43.019398 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"677fc3da-714e-4157-a167-cd49355a7e62","Type":"ContainerStarted","Data":"5be2ce2984704e62dc67102dcd8f59d345dbee99ad1171d0e30a4ee5a1957e7b"} Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.730584 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.755873 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.855602 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r5q9\" (UniqueName: \"kubernetes.io/projected/a39c95f2-d908-4593-9ddf-813da13c1f6a-kube-api-access-2r5q9\") pod \"a39c95f2-d908-4593-9ddf-813da13c1f6a\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.855658 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-ssh-key-openstack-edpm-ipam\") pod \"a39c95f2-d908-4593-9ddf-813da13c1f6a\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.855706 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-repo-setup-combined-ca-bundle\") pod \"a39c95f2-d908-4593-9ddf-813da13c1f6a\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.855758 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-inventory\") pod \"a39c95f2-d908-4593-9ddf-813da13c1f6a\" (UID: \"a39c95f2-d908-4593-9ddf-813da13c1f6a\") " Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.861776 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a39c95f2-d908-4593-9ddf-813da13c1f6a" (UID: "a39c95f2-d908-4593-9ddf-813da13c1f6a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.862035 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39c95f2-d908-4593-9ddf-813da13c1f6a-kube-api-access-2r5q9" (OuterVolumeSpecName: "kube-api-access-2r5q9") pod "a39c95f2-d908-4593-9ddf-813da13c1f6a" (UID: "a39c95f2-d908-4593-9ddf-813da13c1f6a"). InnerVolumeSpecName "kube-api-access-2r5q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.889001 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a39c95f2-d908-4593-9ddf-813da13c1f6a" (UID: "a39c95f2-d908-4593-9ddf-813da13c1f6a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.906227 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-inventory" (OuterVolumeSpecName: "inventory") pod "a39c95f2-d908-4593-9ddf-813da13c1f6a" (UID: "a39c95f2-d908-4593-9ddf-813da13c1f6a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.958722 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r5q9\" (UniqueName: \"kubernetes.io/projected/a39c95f2-d908-4593-9ddf-813da13c1f6a-kube-api-access-2r5q9\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.958757 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.958767 4966 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:44 crc kubenswrapper[4966]: I0127 16:10:44.958780 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39c95f2-d908-4593-9ddf-813da13c1f6a-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.049285 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" event={"ID":"a39c95f2-d908-4593-9ddf-813da13c1f6a","Type":"ContainerDied","Data":"2ed09b8af3a8301a684e7310307d409317f9968b0d3b304ba7c70d09c3ffe55c"} Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.049326 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.049331 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed09b8af3a8301a684e7310307d409317f9968b0d3b304ba7c70d09c3ffe55c" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.066414 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"677fc3da-714e-4157-a167-cd49355a7e62","Type":"ContainerStarted","Data":"eef399523d1f0bae2db9e514c3498effb170522b93cc2dc0b4896fd0cda8e1d2"} Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.197742 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"] Jan 27 16:10:45 crc kubenswrapper[4966]: E0127 16:10:45.198404 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39c95f2-d908-4593-9ddf-813da13c1f6a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.198429 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39c95f2-d908-4593-9ddf-813da13c1f6a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.198689 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a39c95f2-d908-4593-9ddf-813da13c1f6a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.199575 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.202660 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.202824 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.204502 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.208919 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.211988 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"] Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.266468 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbm9d\" (UniqueName: \"kubernetes.io/projected/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-kube-api-access-nbm9d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8q6cm\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.266582 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8q6cm\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" Jan 27 16:10:45 crc 
Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.369016 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8q6cm\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"
Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.369178 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbm9d\" (UniqueName: \"kubernetes.io/projected/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-kube-api-access-nbm9d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8q6cm\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"
Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.369233 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8q6cm\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"
Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.374614 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8q6cm\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"
Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.376871 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8q6cm\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"
Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.385860 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbm9d\" (UniqueName: \"kubernetes.io/projected/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-kube-api-access-nbm9d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8q6cm\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"
Jan 27 16:10:45 crc kubenswrapper[4966]: I0127 16:10:45.522305 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"
Jan 27 16:10:46 crc kubenswrapper[4966]: I0127 16:10:46.085279 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"677fc3da-714e-4157-a167-cd49355a7e62","Type":"ContainerStarted","Data":"a60b96bdac5175178d0f176a9e06bba58c97773c9005609895d76f02abd0a491"}
Jan 27 16:10:46 crc kubenswrapper[4966]: I0127 16:10:46.110369 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.670595848 podStartE2EDuration="6.110350505s" podCreationTimestamp="2026-01-27 16:10:40 +0000 UTC" firstStartedPulling="2026-01-27 16:10:40.996315697 +0000 UTC m=+1707.299109185" lastFinishedPulling="2026-01-27 16:10:45.436070354 +0000 UTC m=+1711.738863842" observedRunningTime="2026-01-27 16:10:46.105430181 +0000 UTC m=+1712.408223669" watchObservedRunningTime="2026-01-27 16:10:46.110350505 +0000 UTC m=+1712.413143993"
Jan 27 16:10:46 crc kubenswrapper[4966]: I0127 16:10:46.219234 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm"]
Jan 27 16:10:46 crc kubenswrapper[4966]: E0127 16:10:46.898412 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]"
Jan 27 16:10:47 crc kubenswrapper[4966]: I0127 16:10:47.097573 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" event={"ID":"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383","Type":"ContainerStarted","Data":"b32e4413989247b2d7825555b22b50f4f6976d6eed6ea3e84e87e997c0f96607"}
Jan 27 16:10:47 crc kubenswrapper[4966]: I0127 16:10:47.983412 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.051412 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9809bd0-3c51-46c3-b6c0-0b2576685999-erlang-cookie-secret\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.051599 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-plugins-conf\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.051635 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9809bd0-3c51-46c3-b6c0-0b2576685999-pod-info\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.051682 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-server-conf\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.051723 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-erlang-cookie\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.051769 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-confd\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.051818 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-plugins\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.051862 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29mt8\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-kube-api-access-29mt8\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.055401 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.056764 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.057323 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.057427 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-config-data\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.057490 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-tls\") pod \"c9809bd0-3c51-46c3-b6c0-0b2576685999\" (UID: \"c9809bd0-3c51-46c3-b6c0-0b2576685999\") " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.057927 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.058857 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.058886 4966 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.058959 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.066248 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-kube-api-access-29mt8" (OuterVolumeSpecName: "kube-api-access-29mt8") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "kube-api-access-29mt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.079668 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c9809bd0-3c51-46c3-b6c0-0b2576685999-pod-info" (OuterVolumeSpecName: "pod-info") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.090100 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.096718 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9809bd0-3c51-46c3-b6c0-0b2576685999-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: E0127 16:10:48.106318 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:10:48 crc kubenswrapper[4966]: E0127 16:10:48.118115 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.172054 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b" (OuterVolumeSpecName: "persistence") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "pvc-d2769c9f-753f-4b73-a13f-149ee353e85b". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.212325 4966 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9809bd0-3c51-46c3-b6c0-0b2576685999-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.214812 4966 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9809bd0-3c51-46c3-b6c0-0b2576685999-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.215154 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29mt8\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-kube-api-access-29mt8\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.215325 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") on node \"crc\" " Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.215435 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.239121 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" event={"ID":"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383","Type":"ContainerStarted","Data":"ebcb73fd4121fadbb6bf97ae6efcefdd5c20c4af7ad9b73ce8fdec6b5511d4b3"} Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.281213 4966 generic.go:334] "Generic (PLEG): container finished" podID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerID="83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a" exitCode=0 Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.281260 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c9809bd0-3c51-46c3-b6c0-0b2576685999","Type":"ContainerDied","Data":"83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a"} Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.281989 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c9809bd0-3c51-46c3-b6c0-0b2576685999","Type":"ContainerDied","Data":"12c8cb0523eb5cdbeb475c11b0a7803681a709b17a5dfb940c902d52dfc24847"} Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.282108 4966 scope.go:117] "RemoveContainer" containerID="83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.281401 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.287443 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-config-data" (OuterVolumeSpecName: "config-data") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.302365 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" podStartSLOduration=2.796093684 podStartE2EDuration="3.302342614s" podCreationTimestamp="2026-01-27 16:10:45 +0000 UTC" firstStartedPulling="2026-01-27 16:10:46.218800347 +0000 UTC m=+1712.521593835" lastFinishedPulling="2026-01-27 16:10:46.725049277 +0000 UTC m=+1713.027842765" observedRunningTime="2026-01-27 16:10:48.277007479 +0000 UTC m=+1714.579800977" watchObservedRunningTime="2026-01-27 16:10:48.302342614 +0000 UTC m=+1714.605136102" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.308616 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.308778 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d2769c9f-753f-4b73-a13f-149ee353e85b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b") on node "crc" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.313827 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-server-conf" (OuterVolumeSpecName: "server-conf") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.318572 4966 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.318761 4966 reconciler_common.go:293] "Volume detached for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.318849 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9809bd0-3c51-46c3-b6c0-0b2576685999-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.347116 4966 scope.go:117] "RemoveContainer" containerID="cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.420143 4966 scope.go:117] "RemoveContainer" containerID="83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a" Jan 27 16:10:48 crc kubenswrapper[4966]: E0127 16:10:48.422095 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a\": container with ID starting with 83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a not found: ID does not exist" containerID="83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.422130 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a"} err="failed to get container status 
\"83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a\": rpc error: code = NotFound desc = could not find container \"83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a\": container with ID starting with 83744988b8a2dae4e073a188b47d32bd81b7dbaa6c72254eb23d1078c7ae238a not found: ID does not exist" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.422858 4966 scope.go:117] "RemoveContainer" containerID="cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0" Jan 27 16:10:48 crc kubenswrapper[4966]: E0127 16:10:48.425135 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0\": container with ID starting with cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0 not found: ID does not exist" containerID="cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.425198 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0"} err="failed to get container status \"cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0\": rpc error: code = NotFound desc = could not find container \"cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0\": container with ID starting with cb544cfcf80d3d721d19d5f4891a271e38e2857775344a1887bcbab76dd858e0 not found: ID does not exist" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.432447 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c9809bd0-3c51-46c3-b6c0-0b2576685999" (UID: "c9809bd0-3c51-46c3-b6c0-0b2576685999"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.523133 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9809bd0-3c51-46c3-b6c0-0b2576685999-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.608827 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.620360 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.633151 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 16:10:48 crc kubenswrapper[4966]: E0127 16:10:48.633660 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerName="rabbitmq" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.633677 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerName="rabbitmq" Jan 27 16:10:48 crc kubenswrapper[4966]: E0127 16:10:48.633711 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerName="setup-container" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.633721 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerName="setup-container" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.640101 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" containerName="rabbitmq" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.641585 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.668452 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729145 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729199 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729228 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpdjq\" (UniqueName: \"kubernetes.io/projected/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-kube-api-access-jpdjq\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729262 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729292 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729309 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729331 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729350 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729401 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-config-data\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729495 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.729543 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832114 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832248 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832306 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832362 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832412 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpdjq\" (UniqueName: \"kubernetes.io/projected/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-kube-api-access-jpdjq\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832466 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832519 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 
27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832555 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832605 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832647 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.832746 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-config-data\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.834457 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.834494 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.834990 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.835018 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-config-data\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.835184 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.839134 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.839216 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.839610 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.839765 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.850326 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.850376 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ae8ca631bd9e01a225a1e43fc47472bdb87f8107a883fd607e22e62d3fb3f48c/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.851238 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpdjq\" (UniqueName: \"kubernetes.io/projected/cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d-kube-api-access-jpdjq\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.938075 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2769c9f-753f-4b73-a13f-149ee353e85b\") pod \"rabbitmq-server-1\" (UID: \"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d\") " pod="openstack/rabbitmq-server-1" Jan 27 16:10:48 crc kubenswrapper[4966]: I0127 16:10:48.968366 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 16:10:49 crc kubenswrapper[4966]: I0127 16:10:49.566696 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 16:10:50 crc kubenswrapper[4966]: I0127 16:10:50.340031 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d","Type":"ContainerStarted","Data":"4bca6f31d461b375d2bf5c725815b084b0b7e59d5ee04db92781ceade4a723ef"} Jan 27 16:10:50 crc kubenswrapper[4966]: I0127 16:10:50.535951 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9809bd0-3c51-46c3-b6c0-0b2576685999" path="/var/lib/kubelet/pods/c9809bd0-3c51-46c3-b6c0-0b2576685999/volumes" Jan 27 16:10:51 crc kubenswrapper[4966]: E0127 16:10:51.079332 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:10:51 crc kubenswrapper[4966]: I0127 16:10:51.353680 4966 generic.go:334] "Generic (PLEG): container finished" podID="a7f9447f-e2b0-4ff0-bdf9-bfc90483e383" containerID="ebcb73fd4121fadbb6bf97ae6efcefdd5c20c4af7ad9b73ce8fdec6b5511d4b3" exitCode=0 Jan 27 16:10:51 crc kubenswrapper[4966]: I0127 16:10:51.353747 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" event={"ID":"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383","Type":"ContainerDied","Data":"ebcb73fd4121fadbb6bf97ae6efcefdd5c20c4af7ad9b73ce8fdec6b5511d4b3"} Jan 27 16:10:52 crc kubenswrapper[4966]: I0127 16:10:52.365179 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d","Type":"ContainerStarted","Data":"1efe2ead7d392a630e0843b221893a74877b7b643fdf91f9ed6868cdfd5c51b1"} Jan 27 16:10:52 crc kubenswrapper[4966]: I0127 16:10:52.523542 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:10:52 crc kubenswrapper[4966]: E0127 16:10:52.523880 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:10:52 crc kubenswrapper[4966]: I0127 16:10:52.841239 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" Jan 27 16:10:52 crc kubenswrapper[4966]: I0127 16:10:52.934489 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-ssh-key-openstack-edpm-ipam\") pod \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " Jan 27 16:10:52 crc kubenswrapper[4966]: I0127 16:10:52.934612 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-inventory\") pod \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " Jan 27 16:10:52 crc kubenswrapper[4966]: I0127 16:10:52.934703 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbm9d\" (UniqueName: \"kubernetes.io/projected/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-kube-api-access-nbm9d\") pod \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\" (UID: \"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383\") " Jan 27 16:10:52 crc kubenswrapper[4966]: I0127 16:10:52.940446 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-kube-api-access-nbm9d" (OuterVolumeSpecName: "kube-api-access-nbm9d") pod "a7f9447f-e2b0-4ff0-bdf9-bfc90483e383" (UID: "a7f9447f-e2b0-4ff0-bdf9-bfc90483e383"). InnerVolumeSpecName "kube-api-access-nbm9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:52 crc kubenswrapper[4966]: I0127 16:10:52.965154 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-inventory" (OuterVolumeSpecName: "inventory") pod "a7f9447f-e2b0-4ff0-bdf9-bfc90483e383" (UID: "a7f9447f-e2b0-4ff0-bdf9-bfc90483e383"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:52 crc kubenswrapper[4966]: I0127 16:10:52.981584 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a7f9447f-e2b0-4ff0-bdf9-bfc90483e383" (UID: "a7f9447f-e2b0-4ff0-bdf9-bfc90483e383"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.037366 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.037414 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.037427 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbm9d\" (UniqueName: \"kubernetes.io/projected/a7f9447f-e2b0-4ff0-bdf9-bfc90483e383-kube-api-access-nbm9d\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.378221 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.378210 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8q6cm" event={"ID":"a7f9447f-e2b0-4ff0-bdf9-bfc90483e383","Type":"ContainerDied","Data":"b32e4413989247b2d7825555b22b50f4f6976d6eed6ea3e84e87e997c0f96607"} Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.378690 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b32e4413989247b2d7825555b22b50f4f6976d6eed6ea3e84e87e997c0f96607" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.471037 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr"] Jan 27 16:10:53 crc kubenswrapper[4966]: E0127 16:10:53.471570 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f9447f-e2b0-4ff0-bdf9-bfc90483e383" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.471586 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f9447f-e2b0-4ff0-bdf9-bfc90483e383" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.471863 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f9447f-e2b0-4ff0-bdf9-bfc90483e383" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.472761 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.475726 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.476594 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.479342 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.492685 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr"] Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.496224 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.549580 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2l96\" (UniqueName: \"kubernetes.io/projected/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-kube-api-access-c2l96\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.549686 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.549711 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.549782 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.652469 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.652645 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2l96\" (UniqueName: \"kubernetes.io/projected/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-kube-api-access-c2l96\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.652789 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.652827 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.667711 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.671514 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-ssh-key-openstack-edpm-ipam\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.671963 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.684598 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2l96\" (UniqueName: \"kubernetes.io/projected/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-kube-api-access-c2l96\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:53 crc kubenswrapper[4966]: I0127 16:10:53.791218 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" Jan 27 16:10:54 crc kubenswrapper[4966]: I0127 16:10:54.369127 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr"] Jan 27 16:10:54 crc kubenswrapper[4966]: I0127 16:10:54.389853 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" event={"ID":"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e","Type":"ContainerStarted","Data":"b76aa1ce37f13267a146b05283e988bb600dc55e627020cbeffa0c0b5edecd54"} Jan 27 16:10:54 crc kubenswrapper[4966]: E0127 16:10:54.481679 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 27 16:10:54 crc kubenswrapper[4966]: E0127 16:10:54.481834 4966 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 16:10:54 crc kubenswrapper[4966]: container &Container{Name:bootstrap-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p osp.edpm.bootstrap -i bootstrap-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 27 16:10:54 crc kubenswrapper[4966]: osp.edpm.bootstrap Jan 27 16:10:54 crc kubenswrapper[4966]: Jan 27 16:10:54 crc kubenswrapper[4966]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 27 16:10:54 crc kubenswrapper[4966]: edpm_override_hosts: openstack-edpm-ipam Jan 27 16:10:54 crc kubenswrapper[4966]: edpm_service_type: bootstrap Jan 27 16:10:54 crc kubenswrapper[4966]: Jan 27 16:10:54 crc kubenswrapper[4966]: Jan 27 16:10:54 crc kubenswrapper[4966]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bootstrap-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/bootstrap,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2l96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr_openstack(c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e): ErrImagePull: initializing source docker://quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest: Requesting bearer token: invalid status code from registry 502 (Bad Gateway) Jan 27 16:10:54 crc kubenswrapper[4966]: > logger="UnhandledError" Jan 27 16:10:54 crc kubenswrapper[4966]: E0127 16:10:54.482995 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bootstrap-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"initializing source docker://quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)\"" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" podUID="c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" Jan 27 16:10:55 crc kubenswrapper[4966]: E0127 16:10:55.404161 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bootstrap-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" podUID="c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" Jan 27 16:11:01 crc kubenswrapper[4966]: E0127 16:11:01.496115 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:11:01 crc kubenswrapper[4966]: E0127 16:11:01.638076 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:11:03 crc kubenswrapper[4966]: I0127 16:11:03.521282 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:11:03 crc kubenswrapper[4966]: E0127 16:11:03.521835 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:11:11 crc kubenswrapper[4966]: E0127 16:11:11.840820 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea91e8a_6a88_4f54_a60e_81f68d447beb.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:11:12 crc kubenswrapper[4966]: I0127 16:11:12.596294 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" event={"ID":"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e","Type":"ContainerStarted","Data":"785f08d1824b05a8958d770b79286ac257d229e1f1ad4f41abb044b9cca6022a"} Jan 27 16:11:12 crc kubenswrapper[4966]: I0127 16:11:12.629677 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" podStartSLOduration=2.6094391249999997 podStartE2EDuration="19.62965309s" podCreationTimestamp="2026-01-27 16:10:53 +0000 UTC" firstStartedPulling="2026-01-27 16:10:54.366194287 +0000 UTC m=+1720.668987775" lastFinishedPulling="2026-01-27 16:11:11.386408252 +0000 UTC m=+1737.689201740" observedRunningTime="2026-01-27 16:11:12.61912936 +0000 UTC m=+1738.921922868" watchObservedRunningTime="2026-01-27 16:11:12.62965309 +0000 UTC m=+1738.932446568" Jan 27 16:11:17 crc kubenswrapper[4966]: I0127 16:11:17.521970 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:11:17 crc kubenswrapper[4966]: E0127 16:11:17.522932 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:11:24 crc kubenswrapper[4966]: I0127 16:11:24.732875 4966 generic.go:334] "Generic (PLEG): container finished" podID="cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d" containerID="1efe2ead7d392a630e0843b221893a74877b7b643fdf91f9ed6868cdfd5c51b1" exitCode=0 Jan 27 16:11:24 crc kubenswrapper[4966]: I0127 16:11:24.732926 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d","Type":"ContainerDied","Data":"1efe2ead7d392a630e0843b221893a74877b7b643fdf91f9ed6868cdfd5c51b1"} Jan 27 16:11:25 crc kubenswrapper[4966]: I0127 16:11:25.746086 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d","Type":"ContainerStarted","Data":"eb0e271244bc5e56a0a2b01a55c988bd4030e7eaefc79cb412cb2ae884ef113c"} Jan 27 16:11:25 crc kubenswrapper[4966]: I0127 16:11:25.746773 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 27 16:11:25 crc kubenswrapper[4966]: I0127 16:11:25.774331 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.774307033 podStartE2EDuration="37.774307033s" podCreationTimestamp="2026-01-27 16:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:11:25.770962358 +0000 UTC m=+1752.073755866" watchObservedRunningTime="2026-01-27 16:11:25.774307033 +0000 UTC m=+1752.077100521" Jan 27 16:11:28 crc kubenswrapper[4966]: I0127 16:11:28.525463 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:11:28 crc kubenswrapper[4966]: E0127 16:11:28.526799 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:11:38 crc kubenswrapper[4966]: I0127 16:11:38.972099 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 27 16:11:39 crc kubenswrapper[4966]: I0127 16:11:39.044724 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 16:11:41 crc kubenswrapper[4966]: I0127 16:11:41.115614 4966 scope.go:117] "RemoveContainer" containerID="5685c7a8b83e567e446891d0a4cdff3f354f53b26adc1b73d1d466e112c1a1f5" Jan 27 16:11:41 crc kubenswrapper[4966]: I0127 16:11:41.175153 4966 scope.go:117] "RemoveContainer" containerID="d6067a5aa2410ee63aaa2194fc062054fb7d86e968c4646ada3dea1cf4f76702" Jan 27 16:11:41 crc kubenswrapper[4966]: I0127 16:11:41.227196 4966 scope.go:117] "RemoveContainer" containerID="579c6d99a3efe5221d1b43dc67146c664fee7e13fb046ad70d7ecc90fb83da6b" Jan 27 16:11:43 crc kubenswrapper[4966]: I0127 16:11:43.309543 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerName="rabbitmq" containerID="cri-o://3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3" gracePeriod=604796 Jan 27 16:11:43 crc kubenswrapper[4966]: I0127 16:11:43.520504 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:11:43 crc kubenswrapper[4966]: E0127 16:11:43.521010 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:11:44 crc kubenswrapper[4966]: I0127 16:11:44.713675 4966 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-server-0" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.629623 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.645218 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-tls\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.645288 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-plugins-conf\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.645364 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw9qv\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-kube-api-access-tw9qv\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.645401 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-confd\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.645723 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.646128 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.646385 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3744b7e0-d355-43b7-bbf3-853416fb4483-pod-info\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.646418 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-plugins\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.646541 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-erlang-cookie\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.646716 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-config-data\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.647190 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3744b7e0-d355-43b7-bbf3-853416fb4483-erlang-cookie-secret\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.647247 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-server-conf\") pod \"3744b7e0-d355-43b7-bbf3-853416fb4483\" (UID: \"3744b7e0-d355-43b7-bbf3-853416fb4483\") " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.648073 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.648318 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.648336 4966 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.648709 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.651526 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3744b7e0-d355-43b7-bbf3-853416fb4483-pod-info" (OuterVolumeSpecName: "pod-info") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.653555 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-kube-api-access-tw9qv" (OuterVolumeSpecName: "kube-api-access-tw9qv") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "kube-api-access-tw9qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.653923 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3744b7e0-d355-43b7-bbf3-853416fb4483-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.658169 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.697282 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-config-data" (OuterVolumeSpecName: "config-data") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.738401 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8" (OuterVolumeSpecName: "persistence") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.751418 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw9qv\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-kube-api-access-tw9qv\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.751484 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") on node \"crc\" " Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.751500 4966 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3744b7e0-d355-43b7-bbf3-853416fb4483-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.751512 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.751524 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.751621 4966 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3744b7e0-d355-43b7-bbf3-853416fb4483-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.751700 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.806289 4966 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.811476 4966 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8") on node "crc" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.835773 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-server-conf" (OuterVolumeSpecName: "server-conf") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.847163 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3744b7e0-d355-43b7-bbf3-853416fb4483" (UID: "3744b7e0-d355-43b7-bbf3-853416fb4483"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.856884 4966 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3744b7e0-d355-43b7-bbf3-853416fb4483-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.857008 4966 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3744b7e0-d355-43b7-bbf3-853416fb4483-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:50 crc kubenswrapper[4966]: I0127 16:11:50.857022 4966 reconciler_common.go:293] "Volume detached for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") on node \"crc\" DevicePath \"\"" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.039029 4966 generic.go:334] "Generic (PLEG): container finished" podID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerID="3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3" exitCode=0 Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.039071 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3744b7e0-d355-43b7-bbf3-853416fb4483","Type":"ContainerDied","Data":"3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3"} Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.039133 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3744b7e0-d355-43b7-bbf3-853416fb4483","Type":"ContainerDied","Data":"e7a40d9e06f6e977e7ae80d754d982e99c7131f7d0b7ed425d023abfb848c0a2"} Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.039150 4966 scope.go:117] "RemoveContainer" containerID="3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.039642 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.083417 4966 scope.go:117] "RemoveContainer" containerID="90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.097225 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.111453 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.134468 4966 scope.go:117] "RemoveContainer" containerID="3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3" Jan 27 16:11:51 crc kubenswrapper[4966]: E0127 16:11:51.134950 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3\": container with ID starting with 3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3 not found: ID does not exist" containerID="3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.134981 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3"} err="failed to get container status \"3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3\": rpc error: code = NotFound desc = could not find container \"3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3\": container with ID starting with 3018afa668b9549e8f42c06685b7e785300c78518b2be9c48c9e7d3a020be5b3 not found: ID does not exist" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.135009 4966 scope.go:117] "RemoveContainer" containerID="90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e" Jan 27 16:11:51 crc kubenswrapper[4966]: E0127 16:11:51.136296 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e\": container with ID starting with 90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e not found: ID does not exist" containerID="90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.136351 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e"} err="failed to get container status \"90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e\": rpc error: code = NotFound desc = could not find container \"90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e\": container with ID starting with 90e10932884502890f4d6ef4308186deae8aa96e83c0dff67a14437ec9de223e not found: ID does not exist" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.143666 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 16:11:51 crc kubenswrapper[4966]: E0127 16:11:51.144662 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerName="setup-container" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.144759 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" 
containerName="setup-container" Jan 27 16:11:51 crc kubenswrapper[4966]: E0127 16:11:51.144871 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerName="rabbitmq" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.144988 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerName="rabbitmq" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.145382 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" containerName="rabbitmq" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.147209 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.163083 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.163142 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10d78543-5cf7-4e24-aa48-52feb8606492-pod-info\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.163178 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.163262 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10d78543-5cf7-4e24-aa48-52feb8606492-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.163295 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10d78543-5cf7-4e24-aa48-52feb8606492-config-data\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.163389 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7v72\" (UniqueName: \"kubernetes.io/projected/10d78543-5cf7-4e24-aa48-52feb8606492-kube-api-access-t7v72\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.163413 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10d78543-5cf7-4e24-aa48-52feb8606492-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 
16:11:51.163565 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10d78543-5cf7-4e24-aa48-52feb8606492-server-conf\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.163868 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.164011 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.164064 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.170646 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.266580 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.266646 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.266781 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.266822 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10d78543-5cf7-4e24-aa48-52feb8606492-pod-info\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.266852 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " 
pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.266912 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10d78543-5cf7-4e24-aa48-52feb8606492-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.266963 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10d78543-5cf7-4e24-aa48-52feb8606492-config-data\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.267047 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7v72\" (UniqueName: \"kubernetes.io/projected/10d78543-5cf7-4e24-aa48-52feb8606492-kube-api-access-t7v72\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.267077 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10d78543-5cf7-4e24-aa48-52feb8606492-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.267154 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10d78543-5cf7-4e24-aa48-52feb8606492-server-conf\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.267201 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.267744 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.268713 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10d78543-5cf7-4e24-aa48-52feb8606492-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0" Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.269081 4966 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.269136 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e106a5c20d0399fb230aa3c602806df4667723965fc672c68ac6f33cfc3bfd0c/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.271187 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.271887 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.271958 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10d78543-5cf7-4e24-aa48-52feb8606492-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.272097 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10d78543-5cf7-4e24-aa48-52feb8606492-config-data\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.272463 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10d78543-5cf7-4e24-aa48-52feb8606492-pod-info\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.272622 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10d78543-5cf7-4e24-aa48-52feb8606492-server-conf\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.273085 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10d78543-5cf7-4e24-aa48-52feb8606492-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.284728 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7v72\" (UniqueName: \"kubernetes.io/projected/10d78543-5cf7-4e24-aa48-52feb8606492-kube-api-access-t7v72\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.361640 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fe97e40-819f-4edc-abc0-1ab5fcd16fc8\") pod \"rabbitmq-server-0\" (UID: \"10d78543-5cf7-4e24-aa48-52feb8606492\") " pod="openstack/rabbitmq-server-0"
Jan 27 16:11:51 crc kubenswrapper[4966]: I0127 16:11:51.509503 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 16:11:52 crc kubenswrapper[4966]: I0127 16:11:52.018129 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 16:11:52 crc kubenswrapper[4966]: I0127 16:11:52.050447 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10d78543-5cf7-4e24-aa48-52feb8606492","Type":"ContainerStarted","Data":"1576707b38e3de46cb71c7d79705073ba123ae5a846883ce0898d410ecc96641"}
Jan 27 16:11:52 crc kubenswrapper[4966]: I0127 16:11:52.538119 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3744b7e0-d355-43b7-bbf3-853416fb4483" path="/var/lib/kubelet/pods/3744b7e0-d355-43b7-bbf3-853416fb4483/volumes"
Jan 27 16:11:54 crc kubenswrapper[4966]: I0127 16:11:54.078036 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10d78543-5cf7-4e24-aa48-52feb8606492","Type":"ContainerStarted","Data":"a04ba92ce3ae166b2da88576ad5f7aaf9969a3947ffeffc915a3119f8944cf27"}
Jan 27 16:11:55 crc kubenswrapper[4966]: I0127 16:11:55.522517 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"
Jan 27 16:11:55 crc kubenswrapper[4966]: E0127 16:11:55.523695 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:12:09 crc kubenswrapper[4966]: I0127 16:12:09.521125 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"
Jan 27 16:12:09 crc kubenswrapper[4966]: E0127 16:12:09.522046 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:12:22 crc kubenswrapper[4966]: I0127 16:12:22.521232 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"
Jan 27 16:12:22 crc kubenswrapper[4966]: E0127 16:12:22.522119 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
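Editor's note: the repeated "back-off 5m0s restarting failed container" errors above show the crash-loop back-off at its cap. A minimal Go sketch of that growth pattern, assuming a small starting delay that doubles per failed restart up to the 5m cap visible in the message; the starting constant is an assumption, not read from kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	const initial = 10 * time.Second // assumed starting delay
	const maxDelay = 5 * time.Minute // cap visible in the log message
	d := initial
	for i := 1; i <= 8; i++ {
		fmt.Printf("failed restart %d: next attempt in %v\n", i, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay // further failures all wait the capped 5m0s
		}
	}
}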
Jan 27 16:12:26 crc kubenswrapper[4966]: I0127 16:12:26.484175 4966 generic.go:334] "Generic (PLEG): container finished" podID="10d78543-5cf7-4e24-aa48-52feb8606492" containerID="a04ba92ce3ae166b2da88576ad5f7aaf9969a3947ffeffc915a3119f8944cf27" exitCode=0
Jan 27 16:12:26 crc kubenswrapper[4966]: I0127 16:12:26.484277 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10d78543-5cf7-4e24-aa48-52feb8606492","Type":"ContainerDied","Data":"a04ba92ce3ae166b2da88576ad5f7aaf9969a3947ffeffc915a3119f8944cf27"}
Jan 27 16:12:27 crc kubenswrapper[4966]: I0127 16:12:27.497940 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10d78543-5cf7-4e24-aa48-52feb8606492","Type":"ContainerStarted","Data":"efd4e6742fc9c63bcb85ad317469e1847dc6ae016647f814d76706d6085ea455"}
Jan 27 16:12:27 crc kubenswrapper[4966]: I0127 16:12:27.498746 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 27 16:12:27 crc kubenswrapper[4966]: I0127 16:12:27.525039 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.525017598 podStartE2EDuration="36.525017598s" podCreationTimestamp="2026-01-27 16:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:12:27.519520426 +0000 UTC m=+1813.822313924" watchObservedRunningTime="2026-01-27 16:12:27.525017598 +0000 UTC m=+1813.827811086"
Jan 27 16:12:36 crc kubenswrapper[4966]: I0127 16:12:36.522084 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"
Jan 27 16:12:36 crc kubenswrapper[4966]: E0127 16:12:36.523095 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:12:41 crc kubenswrapper[4966]: I0127 16:12:41.363762 4966 scope.go:117] "RemoveContainer" containerID="042a0171df1179e3f83e0ecc77f468cc8390a8e6fc510489d15712ddea2d801b"
Jan 27 16:12:41 crc kubenswrapper[4966]: I0127 16:12:41.512346 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="10d78543-5cf7-4e24-aa48-52feb8606492" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.25:5671: connect: connection refused"
Jan 27 16:12:41 crc kubenswrapper[4966]: I0127 16:12:41.534218 4966 scope.go:117] "RemoveContainer" containerID="3b9cc635282d8d8bed4cd86330b8e46cca1139eac93cc04cbbf3db9735912118"
Jan 27 16:12:41 crc kubenswrapper[4966]: I0127 16:12:41.559159 4966 scope.go:117] "RemoveContainer" containerID="7b2aca9d277f1dcec6e7503e5238350016f7c435118428be7c3d5f5e2834fa18"
Jan 27 16:12:41 crc kubenswrapper[4966]: I0127 16:12:41.609079 4966 scope.go:117] "RemoveContainer" containerID="4aede79151d22fda9bfdcbd820539923d790f45a0123f8f7ce030fa9ce38640f"
Jan 27 16:12:41 crc kubenswrapper[4966]: I0127 16:12:41.697334 4966 scope.go:117] "RemoveContainer" containerID="37ec43e7f1708e0f246642b4f42869cc8a0b5d2e5d0d8ecadb1cdf0ba4681644"
Jan 27 16:12:50 crc kubenswrapper[4966]: I0127 16:12:50.521502 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795"
Jan 27 16:12:51 crc kubenswrapper[4966]: I0127 16:12:51.512048 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 27 16:12:51 crc kubenswrapper[4966]: I0127 16:12:51.827257 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"ccc6669efff84876a93e08ec54b68bd51a8c24361bb45057ba982fe790d465ea"}
Jan 27 16:13:42 crc kubenswrapper[4966]: I0127 16:13:42.279137 4966 scope.go:117] "RemoveContainer" containerID="4e486ef5edbe46aa593b0e15003256cff3449f7f246f575816bfa12f1329b791"
Jan 27 16:13:54 crc kubenswrapper[4966]: I0127 16:13:54.047638 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2a93-account-create-update-hz627"]
Jan 27 16:13:54 crc kubenswrapper[4966]: I0127 16:13:54.059641 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2a93-account-create-update-hz627"]
Jan 27 16:13:54 crc kubenswrapper[4966]: I0127 16:13:54.535579 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd7137c-b5f0-4e7b-9fd3-d145e97493eb" path="/var/lib/kubelet/pods/6fd7137c-b5f0-4e7b-9fd3-d145e97493eb/volumes"
Jan 27 16:13:55 crc kubenswrapper[4966]: I0127 16:13:55.036300 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rlmr2"]
Jan 27 16:13:55 crc kubenswrapper[4966]: I0127 16:13:55.054038 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rlmr2"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.034436 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-456c-account-create-update-s5w5c"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.046701 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-c879q"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.059612 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-456c-account-create-update-s5w5c"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.072694 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqw86"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.084831 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-c879q"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.096789 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f2b8-account-create-update-x59c5"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.107514 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3f19-account-create-update-h6kd6"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.117615 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqw86"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.130651 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f2b8-account-create-update-x59c5"]
Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.144776 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3f19-account-create-update-h6kd6"]
source="api" pods=["openstack/glance-db-create-6k7tg"] Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.170712 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-6k7tg"] Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.539215 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006407e5-c179-4f35-874a-af73cc024106" path="/var/lib/kubelet/pods/006407e5-c179-4f35-874a-af73cc024106/volumes" Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.558815 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b4d4d2d-fe32-40fc-a565-8cdf4acbb853" path="/var/lib/kubelet/pods/1b4d4d2d-fe32-40fc-a565-8cdf4acbb853/volumes" Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.571546 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22399f16-c5ed-4686-9d3e-f73bc0bea72e" path="/var/lib/kubelet/pods/22399f16-c5ed-4686-9d3e-f73bc0bea72e/volumes" Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.573246 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7033d440-7efe-4838-b31b-d84a86491a1f" path="/var/lib/kubelet/pods/7033d440-7efe-4838-b31b-d84a86491a1f/volumes" Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.576499 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c799f6-7ef0-4d2b-ae60-7808e22e9699" path="/var/lib/kubelet/pods/b0c799f6-7ef0-4d2b-ae60-7808e22e9699/volumes" Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.582589 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59f8f2a-53c9-4e5a-bb13-e79058d62972" path="/var/lib/kubelet/pods/e59f8f2a-53c9-4e5a-bb13-e79058d62972/volumes" Jan 27 16:13:56 crc kubenswrapper[4966]: I0127 16:13:56.584626 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd643b41-42c6-4d59-8ee9-591b9717a088" path="/var/lib/kubelet/pods/fd643b41-42c6-4d59-8ee9-591b9717a088/volumes" Jan 27 16:14:09 crc kubenswrapper[4966]: I0127 16:14:09.052140 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"] Jan 27 16:14:09 crc kubenswrapper[4966]: I0127 16:14:09.067405 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4q69b"] Jan 27 16:14:09 crc kubenswrapper[4966]: I0127 16:14:09.079961 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-b2ff-account-create-update-8jnqx"] Jan 27 16:14:09 crc kubenswrapper[4966]: I0127 16:14:09.096011 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4q69b"] Jan 27 16:14:10 crc kubenswrapper[4966]: I0127 16:14:10.534667 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6f7696-a033-4ffd-a248-08cc900c0def" path="/var/lib/kubelet/pods/4c6f7696-a033-4ffd-a248-08cc900c0def/volumes" Jan 27 16:14:10 crc kubenswrapper[4966]: I0127 16:14:10.535718 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c4d7fc-d6ef-4408-84fe-d936ee3a0668" path="/var/lib/kubelet/pods/e5c4d7fc-d6ef-4408-84fe-d936ee3a0668/volumes" Jan 27 16:14:21 crc kubenswrapper[4966]: I0127 16:14:21.032691 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-m7krk"] Jan 27 16:14:21 crc kubenswrapper[4966]: I0127 16:14:21.044636 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-m7krk"] Jan 27 16:14:22 crc 
kubenswrapper[4966]: I0127 16:14:22.534694 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c5e231-0221-4bc4-affd-f30db82bed7a" path="/var/lib/kubelet/pods/78c5e231-0221-4bc4-affd-f30db82bed7a/volumes" Jan 27 16:14:31 crc kubenswrapper[4966]: I0127 16:14:31.083984 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-s9cgg"] Jan 27 16:14:31 crc kubenswrapper[4966]: I0127 16:14:31.109028 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-s9cgg"] Jan 27 16:14:32 crc kubenswrapper[4966]: I0127 16:14:32.037391 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-5fdfx"] Jan 27 16:14:32 crc kubenswrapper[4966]: I0127 16:14:32.052353 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-5fdfx"] Jan 27 16:14:32 crc kubenswrapper[4966]: I0127 16:14:32.538188 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f50afa-bf36-4363-8076-0b8271d89a85" path="/var/lib/kubelet/pods/33f50afa-bf36-4363-8076-0b8271d89a85/volumes" Jan 27 16:14:32 crc kubenswrapper[4966]: I0127 16:14:32.554825 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca13755-0032-4941-bfcd-7550197712c7" path="/var/lib/kubelet/pods/bca13755-0032-4941-bfcd-7550197712c7/volumes" Jan 27 16:14:33 crc kubenswrapper[4966]: I0127 16:14:33.030092 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9524-account-create-update-nwkpp"] Jan 27 16:14:33 crc kubenswrapper[4966]: I0127 16:14:33.042294 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9524-account-create-update-nwkpp"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.042626 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tzbwt"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.054224 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-nd6n6"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.088861 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tzbwt"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.111962 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-nd6n6"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.131867 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0818-account-create-update-sjz8w"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.144548 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5sjsx"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.159404 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2dda-account-create-update-q4kct"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.171982 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5sjsx"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.182019 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0818-account-create-update-sjz8w"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.195204 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2dda-account-create-update-q4kct"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.208967 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-9d6a-account-create-update-4cs7w"] Jan 27 16:14:34 
crc kubenswrapper[4966]: I0127 16:14:34.221811 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-9d6a-account-create-update-4cs7w"] Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.533658 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a15438a-18b1-4bca-9566-30c48526de56" path="/var/lib/kubelet/pods/1a15438a-18b1-4bca-9566-30c48526de56/volumes" Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.537139 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe63661-1f35-4598-ab4e-86f934127864" path="/var/lib/kubelet/pods/4fe63661-1f35-4598-ab4e-86f934127864/volumes" Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.539605 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a" path="/var/lib/kubelet/pods/83f2360e-3cb8-4bda-b2f6-6d1fd30ea58a/volumes" Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.541172 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dd5605-46b2-44d0-b2e8-30492c2049ea" path="/var/lib/kubelet/pods/92dd5605-46b2-44d0-b2e8-30492c2049ea/volumes" Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.542770 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca52e453-b636-461d-b18f-1ca06af6a91f" path="/var/lib/kubelet/pods/ca52e453-b636-461d-b18f-1ca06af6a91f/volumes" Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.546846 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7608642-7d28-462c-9838-2b1aa51a69ed" path="/var/lib/kubelet/pods/e7608642-7d28-462c-9838-2b1aa51a69ed/volumes" Jan 27 16:14:34 crc kubenswrapper[4966]: I0127 16:14:34.549532 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84c4662-7140-4e74-beff-dc84f3c0b6c7" path="/var/lib/kubelet/pods/e84c4662-7140-4e74-beff-dc84f3c0b6c7/volumes" Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.406167 4966 scope.go:117] "RemoveContainer" containerID="9d9640c6dcbdb3d048da5960606e7fa314e5605495c9072bbec9bd39ddb5e81b" Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.444737 4966 scope.go:117] "RemoveContainer" containerID="bda92224ebd6d3388fad27d0138bd99e486f2675ae0fcdc4086bcf2f0b4bf9ff" Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.517115 4966 scope.go:117] "RemoveContainer" containerID="fa888a72c5557dcf71a41ffb4493854d81e44297a48ca215cf16b236f8d766bb" Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.570116 4966 scope.go:117] "RemoveContainer" containerID="1acdeccc416a100a99742414f605a35121ae3df1b644d120337569744e12ed5f" Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.675040 4966 scope.go:117] "RemoveContainer" containerID="8915f48209e200d2336575236659726fd62dd8a48afeb76b4352e4b52aacfb63" Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.748311 4966 scope.go:117] "RemoveContainer" containerID="f359cba5060bfa416a0561eaf54a394493f4360fd51117c5f640e97a2beb4779" Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.786875 4966 scope.go:117] "RemoveContainer" containerID="23e15de12b5420e6c3ad9ad1150fe0d64f0c7535f06e964579051108d2799b9d" Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.822701 4966 scope.go:117] "RemoveContainer" containerID="a031bf4240b8bc65ada94c839fa8f64f8e1ef091d965088373fa5b826154577a" Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.867322 4966 scope.go:117] "RemoveContainer" containerID="fb967511cb241bec3eb11f46bdf77548652d12383d18266914afd69662ee2dbf" Jan 27 16:14:42 crc 
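Editor's note: the "Cleaned up orphaned pod volumes dir" records above are the kubelet reclaiming /var/lib/kubelet/pods/<uid>/volumes for pods that were deleted from the API. A Go sketch of the orphan scan under assumed simplifications; isKnown stands in for the kubelet's pod manager lookup, and real cleanup only proceeds once all volumes are unmounted.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// UIDs of pods still bound to this node (stand-in for the pod manager).
	known := map[string]bool{"10d78543-5cf7-4e24-aa48-52feb8606492": true}

	entries, err := os.ReadDir("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		if !e.IsDir() || known[e.Name()] {
			continue // directory belongs to a live pod
		}
		dir := filepath.Join("/var/lib/kubelet/pods", e.Name(), "volumes")
		// The real kubelet verifies the volumes dir is empty/unmounted first.
		fmt.Println("orphan candidate, would clean up:", dir)
	}
}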
Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.893180 4966 scope.go:117] "RemoveContainer" containerID="745ebd32475c1fc547532352669d70e75978f8ecbce7e457479630bbc6753e98"
Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.935445 4966 scope.go:117] "RemoveContainer" containerID="22b4cae50654a749e7d911158110b93d6dca19b876d96a32a403bfd2ff90bc22"
Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.963585 4966 scope.go:117] "RemoveContainer" containerID="dde934ed23745608fcb281716783c17ade0d45f63e3f471eb90462553e4cf4bf"
Jan 27 16:14:42 crc kubenswrapper[4966]: I0127 16:14:42.989010 4966 scope.go:117] "RemoveContainer" containerID="d188a035bb6a62a4b62d1aa990a4f127b5d62b61a888cd49612e5ca36aeaae04"
Jan 27 16:14:43 crc kubenswrapper[4966]: I0127 16:14:43.012609 4966 scope.go:117] "RemoveContainer" containerID="85e4314d666f44850abd9172f03c796f4c80d82f3df6ed3bdc9714be565b6ea7"
Jan 27 16:14:43 crc kubenswrapper[4966]: I0127 16:14:43.038653 4966 scope.go:117] "RemoveContainer" containerID="ea031b24f60ca90180d7a09becc98dc20573b0c65688867bbcfe51ee475e50d1"
Jan 27 16:14:43 crc kubenswrapper[4966]: I0127 16:14:43.069233 4966 scope.go:117] "RemoveContainer" containerID="3b05ef822c13e9be7561f4cdb501dc287ba12816f27d46cde20ef89b83d8bea7"
Jan 27 16:14:43 crc kubenswrapper[4966]: I0127 16:14:43.098040 4966 scope.go:117] "RemoveContainer" containerID="32456e11f86d656c26bfac00572591159c5d650995fb917a810013c1f1170096"
Jan 27 16:14:43 crc kubenswrapper[4966]: I0127 16:14:43.152953 4966 scope.go:117] "RemoveContainer" containerID="b5697f1c8ce68aff15fa82c90e3ff2661af5049db5fbc98beb823cfd854019cf"
Jan 27 16:14:43 crc kubenswrapper[4966]: I0127 16:14:43.248674 4966 scope.go:117] "RemoveContainer" containerID="b317d37e1a5a7c3729faaa312ef90f0fd1be9693cf907574f629c044b8e9e8dd"
Jan 27 16:14:43 crc kubenswrapper[4966]: I0127 16:14:43.309307 4966 scope.go:117] "RemoveContainer" containerID="46fd629d488d38bb799dfb8fdb924583cf5fa0273150b4714bf6e320c5b563ee"
Jan 27 16:14:43 crc kubenswrapper[4966]: I0127 16:14:43.361182 4966 scope.go:117] "RemoveContainer" containerID="587e4147c19e024d12165e83fdaba9915d92637ded5bfefe5f8a1311922a42f6"
Jan 27 16:14:43 crc kubenswrapper[4966]: I0127 16:14:43.401235 4966 scope.go:117] "RemoveContainer" containerID="76722416e8e1d9ef1654f06a700f693a846cb88706cb407c1b1d9aff39b48390"
Jan 27 16:14:53 crc kubenswrapper[4966]: I0127 16:14:53.601126 4966 generic.go:334] "Generic (PLEG): container finished" podID="c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" containerID="785f08d1824b05a8958d770b79286ac257d229e1f1ad4f41abb044b9cca6022a" exitCode=0
Jan 27 16:14:53 crc kubenswrapper[4966]: I0127 16:14:53.601204 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" event={"ID":"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e","Type":"ContainerDied","Data":"785f08d1824b05a8958d770b79286ac257d229e1f1ad4f41abb044b9cca6022a"}
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.216702 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr"
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.288603 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-inventory\") pod \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") "
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.288661 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2l96\" (UniqueName: \"kubernetes.io/projected/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-kube-api-access-c2l96\") pod \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") "
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.288786 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-ssh-key-openstack-edpm-ipam\") pod \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") "
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.288926 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-bootstrap-combined-ca-bundle\") pod \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\" (UID: \"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e\") "
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.298639 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" (UID: "c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.298696 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-kube-api-access-c2l96" (OuterVolumeSpecName: "kube-api-access-c2l96") pod "c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" (UID: "c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e"). InnerVolumeSpecName "kube-api-access-c2l96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.335441 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-inventory" (OuterVolumeSpecName: "inventory") pod "c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" (UID: "c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.348861 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" (UID: "c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.393117 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.393193 4966 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.393212 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.393225 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2l96\" (UniqueName: \"kubernetes.io/projected/c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e-kube-api-access-c2l96\") on node \"crc\" DevicePath \"\""
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.625859 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr" event={"ID":"c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e","Type":"ContainerDied","Data":"b76aa1ce37f13267a146b05283e988bb600dc55e627020cbeffa0c0b5edecd54"}
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.625915 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76aa1ce37f13267a146b05283e988bb600dc55e627020cbeffa0c0b5edecd54"
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.625965 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr"
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.743629 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6"]
Jan 27 16:14:55 crc kubenswrapper[4966]: E0127 16:14:55.744332 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.744354 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.744648 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.747996 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.749292 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.749555 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.753369 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.755877 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6"] Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.801945 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.802043 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.802308 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8qfv\" (UniqueName: \"kubernetes.io/projected/0968ae26-d5aa-401f-b777-78d1ee76cbad-kube-api-access-d8qfv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.905280 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.905416 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8qfv\" (UniqueName: \"kubernetes.io/projected/0968ae26-d5aa-401f-b777-78d1ee76cbad-kube-api-access-d8qfv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.905561 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.910246 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.910458 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:55 crc kubenswrapper[4966]: I0127 16:14:55.925730 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8qfv\" (UniqueName: \"kubernetes.io/projected/0968ae26-d5aa-401f-b777-78d1ee76cbad-kube-api-access-d8qfv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:56 crc kubenswrapper[4966]: I0127 16:14:56.079154 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" Jan 27 16:14:56 crc kubenswrapper[4966]: I0127 16:14:56.629906 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6"] Jan 27 16:14:56 crc kubenswrapper[4966]: I0127 16:14:56.642190 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:14:57 crc kubenswrapper[4966]: I0127 16:14:57.039036 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-sllzs"] Jan 27 16:14:57 crc kubenswrapper[4966]: I0127 16:14:57.054192 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-sllzs"] Jan 27 16:14:57 crc kubenswrapper[4966]: I0127 16:14:57.648699 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" event={"ID":"0968ae26-d5aa-401f-b777-78d1ee76cbad","Type":"ContainerStarted","Data":"8723c8582cfaeee9db4aa335e3865f88d9091919d8b9187142486958d606c56f"} Jan 27 16:14:58 crc kubenswrapper[4966]: I0127 16:14:58.537361 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee305aa0-15fe-46a9-b62f-8936153daddf" path="/var/lib/kubelet/pods/ee305aa0-15fe-46a9-b62f-8936153daddf/volumes" Jan 27 16:14:58 crc kubenswrapper[4966]: I0127 16:14:58.667363 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" event={"ID":"0968ae26-d5aa-401f-b777-78d1ee76cbad","Type":"ContainerStarted","Data":"cb21997bd0ec0a13658205b2b58cc9ec37f40a3883fc7dae7b8e0acb9a092a22"} Jan 27 16:14:58 crc kubenswrapper[4966]: I0127 16:14:58.694337 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" podStartSLOduration=2.749167652 podStartE2EDuration="3.69431592s" podCreationTimestamp="2026-01-27 16:14:55 +0000 UTC" firstStartedPulling="2026-01-27 16:14:56.641987275 +0000 UTC m=+1962.944780763" lastFinishedPulling="2026-01-27 16:14:57.587135543 +0000 UTC m=+1963.889929031" observedRunningTime="2026-01-27 16:14:58.684396949 +0000 UTC m=+1964.987190437" watchObservedRunningTime="2026-01-27 16:14:58.69431592 +0000 UTC m=+1964.997109408" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.179435 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv"] Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.181710 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.185375 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.186167 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.192038 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv"] Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.309726 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b761748d-7255-42cc-a966-7aff3bda5c8c-config-volume\") pod \"collect-profiles-29492175-6q6vv\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.309871 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdh4n\" (UniqueName: \"kubernetes.io/projected/b761748d-7255-42cc-a966-7aff3bda5c8c-kube-api-access-jdh4n\") pod \"collect-profiles-29492175-6q6vv\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.309975 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b761748d-7255-42cc-a966-7aff3bda5c8c-secret-volume\") pod \"collect-profiles-29492175-6q6vv\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.411853 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b761748d-7255-42cc-a966-7aff3bda5c8c-config-volume\") pod \"collect-profiles-29492175-6q6vv\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.412002 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdh4n\" (UniqueName: \"kubernetes.io/projected/b761748d-7255-42cc-a966-7aff3bda5c8c-kube-api-access-jdh4n\") pod 
\"collect-profiles-29492175-6q6vv\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.412090 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b761748d-7255-42cc-a966-7aff3bda5c8c-secret-volume\") pod \"collect-profiles-29492175-6q6vv\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.412743 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b761748d-7255-42cc-a966-7aff3bda5c8c-config-volume\") pod \"collect-profiles-29492175-6q6vv\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.418693 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b761748d-7255-42cc-a966-7aff3bda5c8c-secret-volume\") pod \"collect-profiles-29492175-6q6vv\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.431937 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdh4n\" (UniqueName: \"kubernetes.io/projected/b761748d-7255-42cc-a966-7aff3bda5c8c-kube-api-access-jdh4n\") pod \"collect-profiles-29492175-6q6vv\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:00 crc kubenswrapper[4966]: I0127 16:15:00.547501 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:01 crc kubenswrapper[4966]: I0127 16:15:01.027692 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv"] Jan 27 16:15:01 crc kubenswrapper[4966]: W0127 16:15:01.036682 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb761748d_7255_42cc_a966_7aff3bda5c8c.slice/crio-df3642a148ef3373e795ab11b6d80932fd782ed54bf0a4572aeac724fd93ed57 WatchSource:0}: Error finding container df3642a148ef3373e795ab11b6d80932fd782ed54bf0a4572aeac724fd93ed57: Status 404 returned error can't find the container with id df3642a148ef3373e795ab11b6d80932fd782ed54bf0a4572aeac724fd93ed57 Jan 27 16:15:01 crc kubenswrapper[4966]: I0127 16:15:01.699484 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" event={"ID":"b761748d-7255-42cc-a966-7aff3bda5c8c","Type":"ContainerStarted","Data":"4e9ad1f3fd2a08c212ef5e1fcd2ca015a7101d1706a9f50d663681c8f29b1aee"} Jan 27 16:15:01 crc kubenswrapper[4966]: I0127 16:15:01.699822 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" event={"ID":"b761748d-7255-42cc-a966-7aff3bda5c8c","Type":"ContainerStarted","Data":"df3642a148ef3373e795ab11b6d80932fd782ed54bf0a4572aeac724fd93ed57"} Jan 27 16:15:01 crc kubenswrapper[4966]: I0127 16:15:01.720835 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" podStartSLOduration=1.720790954 podStartE2EDuration="1.720790954s" podCreationTimestamp="2026-01-27 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:15:01.714036922 +0000 UTC m=+1968.016830420" watchObservedRunningTime="2026-01-27 16:15:01.720790954 +0000 UTC m=+1968.023584452" Jan 27 16:15:01 crc kubenswrapper[4966]: E0127 16:15:01.908153 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb761748d_7255_42cc_a966_7aff3bda5c8c.slice/crio-4e9ad1f3fd2a08c212ef5e1fcd2ca015a7101d1706a9f50d663681c8f29b1aee.scope\": RecentStats: unable to find data in memory cache]" Jan 27 16:15:02 crc kubenswrapper[4966]: I0127 16:15:02.713634 4966 generic.go:334] "Generic (PLEG): container finished" podID="b761748d-7255-42cc-a966-7aff3bda5c8c" containerID="4e9ad1f3fd2a08c212ef5e1fcd2ca015a7101d1706a9f50d663681c8f29b1aee" exitCode=0 Jan 27 16:15:02 crc kubenswrapper[4966]: I0127 16:15:02.714046 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" event={"ID":"b761748d-7255-42cc-a966-7aff3bda5c8c","Type":"ContainerDied","Data":"4e9ad1f3fd2a08c212ef5e1fcd2ca015a7101d1706a9f50d663681c8f29b1aee"} Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.131197 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.200382 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdh4n\" (UniqueName: \"kubernetes.io/projected/b761748d-7255-42cc-a966-7aff3bda5c8c-kube-api-access-jdh4n\") pod \"b761748d-7255-42cc-a966-7aff3bda5c8c\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.200466 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b761748d-7255-42cc-a966-7aff3bda5c8c-secret-volume\") pod \"b761748d-7255-42cc-a966-7aff3bda5c8c\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.200515 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b761748d-7255-42cc-a966-7aff3bda5c8c-config-volume\") pod \"b761748d-7255-42cc-a966-7aff3bda5c8c\" (UID: \"b761748d-7255-42cc-a966-7aff3bda5c8c\") " Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.202922 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b761748d-7255-42cc-a966-7aff3bda5c8c-config-volume" (OuterVolumeSpecName: "config-volume") pod "b761748d-7255-42cc-a966-7aff3bda5c8c" (UID: "b761748d-7255-42cc-a966-7aff3bda5c8c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.220709 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b761748d-7255-42cc-a966-7aff3bda5c8c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b761748d-7255-42cc-a966-7aff3bda5c8c" (UID: "b761748d-7255-42cc-a966-7aff3bda5c8c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.223485 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b761748d-7255-42cc-a966-7aff3bda5c8c-kube-api-access-jdh4n" (OuterVolumeSpecName: "kube-api-access-jdh4n") pod "b761748d-7255-42cc-a966-7aff3bda5c8c" (UID: "b761748d-7255-42cc-a966-7aff3bda5c8c"). InnerVolumeSpecName "kube-api-access-jdh4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.303166 4966 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b761748d-7255-42cc-a966-7aff3bda5c8c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.303209 4966 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b761748d-7255-42cc-a966-7aff3bda5c8c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.303222 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdh4n\" (UniqueName: \"kubernetes.io/projected/b761748d-7255-42cc-a966-7aff3bda5c8c-kube-api-access-jdh4n\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.740328 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" event={"ID":"b761748d-7255-42cc-a966-7aff3bda5c8c","Type":"ContainerDied","Data":"df3642a148ef3373e795ab11b6d80932fd782ed54bf0a4572aeac724fd93ed57"} Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.740405 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df3642a148ef3373e795ab11b6d80932fd782ed54bf0a4572aeac724fd93ed57" Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.740374 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv" Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.798585 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr"] Jan 27 16:15:04 crc kubenswrapper[4966]: I0127 16:15:04.809407 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-nl8kr"] Jan 27 16:15:06 crc kubenswrapper[4966]: I0127 16:15:06.542572 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3b7ce8-d257-4466-8385-0e506ba4cb38" path="/var/lib/kubelet/pods/4d3b7ce8-d257-4466-8385-0e506ba4cb38/volumes" Jan 27 16:15:10 crc kubenswrapper[4966]: I0127 16:15:10.119588 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:15:10 crc kubenswrapper[4966]: I0127 16:15:10.120063 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:15:40 crc kubenswrapper[4966]: I0127 16:15:40.123076 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:15:40 crc kubenswrapper[4966]: I0127 16:15:40.123713 4966 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:15:42 crc kubenswrapper[4966]: I0127 16:15:42.052879 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-h2lbn"] Jan 27 16:15:42 crc kubenswrapper[4966]: I0127 16:15:42.065281 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-78x59"] Jan 27 16:15:42 crc kubenswrapper[4966]: I0127 16:15:42.086407 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-78x59"] Jan 27 16:15:42 crc kubenswrapper[4966]: I0127 16:15:42.097654 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-h2lbn"] Jan 27 16:15:42 crc kubenswrapper[4966]: I0127 16:15:42.557712 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621d47f8-d9c4-4875-aad5-8dd30f215f16" path="/var/lib/kubelet/pods/621d47f8-d9c4-4875-aad5-8dd30f215f16/volumes" Jan 27 16:15:42 crc kubenswrapper[4966]: I0127 16:15:42.576539 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d064aa6-d9c2-4adf-a25b-33a70d86e728" path="/var/lib/kubelet/pods/8d064aa6-d9c2-4adf-a25b-33a70d86e728/volumes" Jan 27 16:15:43 crc kubenswrapper[4966]: I0127 16:15:43.857451 4966 scope.go:117] "RemoveContainer" containerID="416e2bc09b74b8effaa4823006b5e389917e67966f17dcdc13238a1ce62366dd" Jan 27 16:15:43 crc kubenswrapper[4966]: I0127 16:15:43.911869 4966 scope.go:117] "RemoveContainer" containerID="e68900a67d7b98ba28d68bda9b7727e6dba876e15292a6e8aa5cd9807e992361" Jan 27 16:15:43 crc kubenswrapper[4966]: I0127 16:15:43.982958 4966 scope.go:117] "RemoveContainer" containerID="b2932732ae140e9744ec03705f36c57581cecf5d768cbfba66569f3a87ee8a41" Jan 27 16:15:44 crc kubenswrapper[4966]: I0127 16:15:44.014851 4966 scope.go:117] "RemoveContainer" containerID="ceb8672ab8232d5c6af3237ca37794f668ee6284d02d458ebaf61ccce70b1033" Jan 27 16:15:46 crc kubenswrapper[4966]: I0127 16:15:46.032824 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-7bd6d"] Jan 27 16:15:46 crc kubenswrapper[4966]: I0127 16:15:46.045373 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-7bd6d"] Jan 27 16:15:46 crc kubenswrapper[4966]: I0127 16:15:46.537210 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71afeadb-1cd9-461f-b899-307f7dd34fca" path="/var/lib/kubelet/pods/71afeadb-1cd9-461f-b899-307f7dd34fca/volumes" Jan 27 16:15:59 crc kubenswrapper[4966]: I0127 16:15:59.060838 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-c4wdb"] Jan 27 16:15:59 crc kubenswrapper[4966]: I0127 16:15:59.073158 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-c4wdb"] Jan 27 16:16:00 crc kubenswrapper[4966]: I0127 16:16:00.534336 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5989d951-ed71-4800-9400-390cbe5513f9" path="/var/lib/kubelet/pods/5989d951-ed71-4800-9400-390cbe5513f9/volumes" Jan 27 16:16:01 crc kubenswrapper[4966]: I0127 16:16:01.030994 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dp2rw"] Jan 27 16:16:01 crc kubenswrapper[4966]: I0127 16:16:01.041707 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-sync-dp2rw"] Jan 27 16:16:02 crc kubenswrapper[4966]: I0127 16:16:02.534667 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d47c000-d7a4-4dca-a051-15a5d91f3ab9" path="/var/lib/kubelet/pods/4d47c000-d7a4-4dca-a051-15a5d91f3ab9/volumes" Jan 27 16:16:10 crc kubenswrapper[4966]: I0127 16:16:10.119410 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:16:10 crc kubenswrapper[4966]: I0127 16:16:10.120042 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:16:10 crc kubenswrapper[4966]: I0127 16:16:10.120095 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 16:16:10 crc kubenswrapper[4966]: I0127 16:16:10.121149 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ccc6669efff84876a93e08ec54b68bd51a8c24361bb45057ba982fe790d465ea"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:16:10 crc kubenswrapper[4966]: I0127 16:16:10.121207 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://ccc6669efff84876a93e08ec54b68bd51a8c24361bb45057ba982fe790d465ea" gracePeriod=600 Jan 27 16:16:10 crc kubenswrapper[4966]: I0127 16:16:10.592278 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="ccc6669efff84876a93e08ec54b68bd51a8c24361bb45057ba982fe790d465ea" exitCode=0 Jan 27 16:16:10 crc kubenswrapper[4966]: I0127 16:16:10.592370 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"ccc6669efff84876a93e08ec54b68bd51a8c24361bb45057ba982fe790d465ea"} Jan 27 16:16:10 crc kubenswrapper[4966]: I0127 16:16:10.592667 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54"} Jan 27 16:16:10 crc kubenswrapper[4966]: I0127 16:16:10.592691 4966 scope.go:117] "RemoveContainer" containerID="0f0ae6dc80fac80ec1d8b2b05fa06e8b540a9e986ead8bf8ee34a6699256b795" Jan 27 16:16:27 crc kubenswrapper[4966]: I0127 16:16:27.885537 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l97pv"] Jan 27 16:16:27 crc kubenswrapper[4966]: E0127 16:16:27.886624 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b761748d-7255-42cc-a966-7aff3bda5c8c" containerName="collect-profiles" 
Jan 27 16:16:27 crc kubenswrapper[4966]: I0127 16:16:27.886637 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b761748d-7255-42cc-a966-7aff3bda5c8c" containerName="collect-profiles"
Jan 27 16:16:27 crc kubenswrapper[4966]: I0127 16:16:27.886866 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b761748d-7255-42cc-a966-7aff3bda5c8c" containerName="collect-profiles"
Jan 27 16:16:27 crc kubenswrapper[4966]: I0127 16:16:27.888590 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:27 crc kubenswrapper[4966]: I0127 16:16:27.918078 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l97pv"]
Jan 27 16:16:27 crc kubenswrapper[4966]: I0127 16:16:27.965873 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-utilities\") pod \"redhat-marketplace-l97pv\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") " pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:27 crc kubenswrapper[4966]: I0127 16:16:27.966029 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj9qj\" (UniqueName: \"kubernetes.io/projected/747a29a7-bbbb-466c-8cd3-80439f4543ce-kube-api-access-gj9qj\") pod \"redhat-marketplace-l97pv\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") " pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:27 crc kubenswrapper[4966]: I0127 16:16:27.966154 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-catalog-content\") pod \"redhat-marketplace-l97pv\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") " pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.070066 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-catalog-content\") pod \"redhat-marketplace-l97pv\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") " pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.070594 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-catalog-content\") pod \"redhat-marketplace-l97pv\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") " pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.070871 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-utilities\") pod \"redhat-marketplace-l97pv\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") " pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.070937 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj9qj\" (UniqueName: \"kubernetes.io/projected/747a29a7-bbbb-466c-8cd3-80439f4543ce-kube-api-access-gj9qj\") pod \"redhat-marketplace-l97pv\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") " pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.071184 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-utilities\") pod \"redhat-marketplace-l97pv\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") " pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.092429 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jtwk2"]
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.094709 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.099075 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj9qj\" (UniqueName: \"kubernetes.io/projected/747a29a7-bbbb-466c-8cd3-80439f4543ce-kube-api-access-gj9qj\") pod \"redhat-marketplace-l97pv\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") " pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.113224 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtwk2"]
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.173390 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-catalog-content\") pod \"redhat-operators-jtwk2\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") " pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.173465 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-utilities\") pod \"redhat-operators-jtwk2\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") " pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.173600 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dgrg\" (UniqueName: \"kubernetes.io/projected/ff339561-ddf4-4c7f-a90a-02a356dc59ed-kube-api-access-6dgrg\") pod \"redhat-operators-jtwk2\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") " pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.211072 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.277049 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-catalog-content\") pod \"redhat-operators-jtwk2\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") " pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.277477 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-utilities\") pod \"redhat-operators-jtwk2\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") " pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.277720 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dgrg\" (UniqueName: \"kubernetes.io/projected/ff339561-ddf4-4c7f-a90a-02a356dc59ed-kube-api-access-6dgrg\") pod \"redhat-operators-jtwk2\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") " pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.277727 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-catalog-content\") pod \"redhat-operators-jtwk2\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") " pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.278370 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-utilities\") pod \"redhat-operators-jtwk2\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") " pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.300788 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dgrg\" (UniqueName: \"kubernetes.io/projected/ff339561-ddf4-4c7f-a90a-02a356dc59ed-kube-api-access-6dgrg\") pod \"redhat-operators-jtwk2\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") " pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.501128 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtwk2"
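Both marketplace pods above walk the same volume sequence: VerifyControllerAttachedVolume is started for every declared volume, then MountVolume is started and MountVolume.SetUp succeeds per volume, with the projected kube-api-access token volume finishing after the near-instant empty-dir mounts. A minimal Go sketch of the desired-state/actual-state reconcile shape behind those messages, with illustrative names only (not the kubelet's actual reconciler types):

    // Pattern sketch of a desired-vs-actual volume reconcile loop.
    package main

    import "fmt"

    type Volume struct{ Name, Plugin string }

    func reconcile(desired []Volume, mounted map[string]bool) {
    	for _, v := range desired {
    		if mounted[v.Name] {
    			continue // actual state already matches desired state
    		}
    		// The kubelet logs "VerifyControllerAttachedVolume started",
    		// then "MountVolume started" / "MountVolume.SetUp succeeded".
    		fmt.Printf("MountVolume started for %q (%s)\n", v.Name, v.Plugin)
    		mounted[v.Name] = true
    	}
    }

    func main() {
    	desired := []Volume{
    		{"utilities", "kubernetes.io/empty-dir"},
    		{"catalog-content", "kubernetes.io/empty-dir"},
    		{"kube-api-access-gj9qj", "kubernetes.io/projected"},
    	}
    	reconcile(desired, map[string]bool{})
    }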
Jan 27 16:16:28 crc kubenswrapper[4966]: I0127 16:16:28.837507 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l97pv"]
Jan 27 16:16:29 crc kubenswrapper[4966]: I0127 16:16:29.094622 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtwk2"]
Jan 27 16:16:29 crc kubenswrapper[4966]: I0127 16:16:29.804423 4966 generic.go:334] "Generic (PLEG): container finished" podID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerID="7bd175690428d76f4f7fdecf7bc99d98ebc9619af46fdab7c097c6d58d2c237f" exitCode=0
Jan 27 16:16:29 crc kubenswrapper[4966]: I0127 16:16:29.804538 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l97pv" event={"ID":"747a29a7-bbbb-466c-8cd3-80439f4543ce","Type":"ContainerDied","Data":"7bd175690428d76f4f7fdecf7bc99d98ebc9619af46fdab7c097c6d58d2c237f"}
Jan 27 16:16:29 crc kubenswrapper[4966]: I0127 16:16:29.804758 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l97pv" event={"ID":"747a29a7-bbbb-466c-8cd3-80439f4543ce","Type":"ContainerStarted","Data":"96dbfa9a564d523f59008e4598599573dc4fca98294f021dbd11eb0f58d749f7"}
Jan 27 16:16:29 crc kubenswrapper[4966]: I0127 16:16:29.807127 4966 generic.go:334] "Generic (PLEG): container finished" podID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerID="d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c" exitCode=0
Jan 27 16:16:29 crc kubenswrapper[4966]: I0127 16:16:29.807196 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwk2" event={"ID":"ff339561-ddf4-4c7f-a90a-02a356dc59ed","Type":"ContainerDied","Data":"d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c"}
Jan 27 16:16:29 crc kubenswrapper[4966]: I0127 16:16:29.807245 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwk2" event={"ID":"ff339561-ddf4-4c7f-a90a-02a356dc59ed","Type":"ContainerStarted","Data":"30386cbad8045b4df43eefa93afdd4fdd81f6df08ee4d31853f885566b90ee7b"}
Jan 27 16:16:31 crc kubenswrapper[4966]: I0127 16:16:31.830763 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l97pv" event={"ID":"747a29a7-bbbb-466c-8cd3-80439f4543ce","Type":"ContainerStarted","Data":"7f2a9f8dbfe4fbd67ca9ea11b024c5921550b80f983ed79a765e1e6776c56fdf"}
Jan 27 16:16:32 crc kubenswrapper[4966]: I0127 16:16:32.842463 4966 generic.go:334] "Generic (PLEG): container finished" podID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerID="7f2a9f8dbfe4fbd67ca9ea11b024c5921550b80f983ed79a765e1e6776c56fdf" exitCode=0
Jan 27 16:16:32 crc kubenswrapper[4966]: I0127 16:16:32.842522 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l97pv" event={"ID":"747a29a7-bbbb-466c-8cd3-80439f4543ce","Type":"ContainerDied","Data":"7f2a9f8dbfe4fbd67ca9ea11b024c5921550b80f983ed79a765e1e6776c56fdf"}
Jan 27 16:16:32 crc kubenswrapper[4966]: I0127 16:16:32.845231 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwk2" event={"ID":"ff339561-ddf4-4c7f-a90a-02a356dc59ed","Type":"ContainerStarted","Data":"6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25"}
Jan 27 16:16:34 crc kubenswrapper[4966]: I0127 16:16:34.885384 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l97pv" event={"ID":"747a29a7-bbbb-466c-8cd3-80439f4543ce","Type":"ContainerStarted","Data":"d3dac5a706b3429cbde9e1ac1c373d65a40dca0ed43c2070bca864c81b1ae68e"}
Jan 27 16:16:34 crc kubenswrapper[4966]: I0127 16:16:34.934913 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l97pv" podStartSLOduration=3.994578157 podStartE2EDuration="7.934880201s" podCreationTimestamp="2026-01-27 16:16:27 +0000 UTC" firstStartedPulling="2026-01-27 16:16:29.806454089 +0000 UTC m=+2056.109247577" lastFinishedPulling="2026-01-27 16:16:33.746756123 +0000 UTC m=+2060.049549621" observedRunningTime="2026-01-27 16:16:34.915126161 +0000 UTC m=+2061.217919659" watchObservedRunningTime="2026-01-27 16:16:34.934880201 +0000 UTC m=+2061.237673689"
Jan 27 16:16:38 crc kubenswrapper[4966]: I0127 16:16:38.212036 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:38 crc kubenswrapper[4966]: I0127 16:16:38.213544 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:38 crc kubenswrapper[4966]: I0127 16:16:38.558189 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:39 crc kubenswrapper[4966]: I0127 16:16:39.988078 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:42 crc kubenswrapper[4966]: I0127 16:16:42.252865 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l97pv"]
Jan 27 16:16:42 crc kubenswrapper[4966]: I0127 16:16:42.253795 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l97pv" podUID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerName="registry-server" containerID="cri-o://d3dac5a706b3429cbde9e1ac1c373d65a40dca0ed43c2070bca864c81b1ae68e" gracePeriod=2
Jan 27 16:16:42 crc kubenswrapper[4966]: I0127 16:16:42.967089 4966 generic.go:334] "Generic (PLEG): container finished" podID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerID="6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25" exitCode=0
Jan 27 16:16:42 crc kubenswrapper[4966]: I0127 16:16:42.967176 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwk2" event={"ID":"ff339561-ddf4-4c7f-a90a-02a356dc59ed","Type":"ContainerDied","Data":"6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25"}
Jan 27 16:16:42 crc kubenswrapper[4966]: I0127 16:16:42.973346 4966 generic.go:334] "Generic (PLEG): container finished" podID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerID="d3dac5a706b3429cbde9e1ac1c373d65a40dca0ed43c2070bca864c81b1ae68e" exitCode=0
Jan 27 16:16:42 crc kubenswrapper[4966]: I0127 16:16:42.973384 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l97pv" event={"ID":"747a29a7-bbbb-466c-8cd3-80439f4543ce","Type":"ContainerDied","Data":"d3dac5a706b3429cbde9e1ac1c373d65a40dca0ed43c2070bca864c81b1ae68e"}
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.486023 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.682481 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-utilities\") pod \"747a29a7-bbbb-466c-8cd3-80439f4543ce\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") "
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.682632 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-catalog-content\") pod \"747a29a7-bbbb-466c-8cd3-80439f4543ce\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") "
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.682737 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj9qj\" (UniqueName: \"kubernetes.io/projected/747a29a7-bbbb-466c-8cd3-80439f4543ce-kube-api-access-gj9qj\") pod \"747a29a7-bbbb-466c-8cd3-80439f4543ce\" (UID: \"747a29a7-bbbb-466c-8cd3-80439f4543ce\") "
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.683066 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-utilities" (OuterVolumeSpecName: "utilities") pod "747a29a7-bbbb-466c-8cd3-80439f4543ce" (UID: "747a29a7-bbbb-466c-8cd3-80439f4543ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.683666 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.697503 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747a29a7-bbbb-466c-8cd3-80439f4543ce-kube-api-access-gj9qj" (OuterVolumeSpecName: "kube-api-access-gj9qj") pod "747a29a7-bbbb-466c-8cd3-80439f4543ce" (UID: "747a29a7-bbbb-466c-8cd3-80439f4543ce"). InnerVolumeSpecName "kube-api-access-gj9qj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.704641 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "747a29a7-bbbb-466c-8cd3-80439f4543ce" (UID: "747a29a7-bbbb-466c-8cd3-80439f4543ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.785570 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj9qj\" (UniqueName: \"kubernetes.io/projected/747a29a7-bbbb-466c-8cd3-80439f4543ce-kube-api-access-gj9qj\") on node \"crc\" DevicePath \"\""
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.785614 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/747a29a7-bbbb-466c-8cd3-80439f4543ce-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.993325 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l97pv"
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.993327 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l97pv" event={"ID":"747a29a7-bbbb-466c-8cd3-80439f4543ce","Type":"ContainerDied","Data":"96dbfa9a564d523f59008e4598599573dc4fca98294f021dbd11eb0f58d749f7"}
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.993784 4966 scope.go:117] "RemoveContainer" containerID="d3dac5a706b3429cbde9e1ac1c373d65a40dca0ed43c2070bca864c81b1ae68e"
Jan 27 16:16:43 crc kubenswrapper[4966]: I0127 16:16:43.998297 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwk2" event={"ID":"ff339561-ddf4-4c7f-a90a-02a356dc59ed","Type":"ContainerStarted","Data":"2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b"}
Jan 27 16:16:44 crc kubenswrapper[4966]: I0127 16:16:44.020414 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jtwk2" podStartSLOduration=2.224978145 podStartE2EDuration="16.020393032s" podCreationTimestamp="2026-01-27 16:16:28 +0000 UTC" firstStartedPulling="2026-01-27 16:16:29.81002268 +0000 UTC m=+2056.112816178" lastFinishedPulling="2026-01-27 16:16:43.605437577 +0000 UTC m=+2069.908231065" observedRunningTime="2026-01-27 16:16:44.015705975 +0000 UTC m=+2070.318499483" watchObservedRunningTime="2026-01-27 16:16:44.020393032 +0000 UTC m=+2070.323186540"
Jan 27 16:16:44 crc kubenswrapper[4966]: I0127 16:16:44.021057 4966 scope.go:117] "RemoveContainer" containerID="7f2a9f8dbfe4fbd67ca9ea11b024c5921550b80f983ed79a765e1e6776c56fdf"
Jan 27 16:16:44 crc kubenswrapper[4966]: I0127 16:16:44.051969 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l97pv"]
Jan 27 16:16:44 crc kubenswrapper[4966]: I0127 16:16:44.054859 4966 scope.go:117] "RemoveContainer" containerID="7bd175690428d76f4f7fdecf7bc99d98ebc9619af46fdab7c097c6d58d2c237f"
Jan 27 16:16:44 crc kubenswrapper[4966]: I0127 16:16:44.061270 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l97pv"]
Jan 27 16:16:44 crc kubenswrapper[4966]: I0127 16:16:44.218245 4966 scope.go:117] "RemoveContainer" containerID="f184896614b1c948c519e41aba00ef22569dfae9acedab34364b4b2e8ca228f4"
Jan 27 16:16:44 crc kubenswrapper[4966]: I0127 16:16:44.246630 4966 scope.go:117] "RemoveContainer" containerID="41b7b1e528a76290c9029e5e699549001d5943d2e75e30d4136830e7c1f442e0"
Jan 27 16:16:44 crc kubenswrapper[4966]: I0127 16:16:44.294964 4966 scope.go:117] "RemoveContainer" containerID="b1787c60db8a370a891efda62a6662c217861954221efe0b8d26c672d800384d"
Jan 27 16:16:44 crc kubenswrapper[4966]: I0127 16:16:44.568729 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747a29a7-bbbb-466c-8cd3-80439f4543ce" path="/var/lib/kubelet/pods/747a29a7-bbbb-466c-8cd3-80439f4543ce/volumes"
Jan 27 16:16:45 crc kubenswrapper[4966]: I0127 16:16:45.054539 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bh6sq"]
Jan 27 16:16:45 crc kubenswrapper[4966]: I0127 16:16:45.065715 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bh6sq"]
Jan 27 16:16:46 crc kubenswrapper[4966]: I0127 16:16:46.538300 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4fc12a0-b687-4984-a892-6ce0cdf2c920" path="/var/lib/kubelet/pods/e4fc12a0-b687-4984-a892-6ce0cdf2c920/volumes"
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.068955 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-57bt9"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.091959 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7djpj"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.182520 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7djpj"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.266971 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-57bt9"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.309829 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-039f-account-create-update-5k2qz"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.320763 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a74e-account-create-update-vg7tr"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.331467 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9a9a-account-create-update-zf625"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.342669 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-039f-account-create-update-5k2qz"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.352054 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9a9a-account-create-update-zf625"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.361428 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a74e-account-create-update-vg7tr"]
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.503018 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.503068 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.561010 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3edb84-792d-4066-bfae-a43c8fefe6da" path="/var/lib/kubelet/pods/1f3edb84-792d-4066-bfae-a43c8fefe6da/volumes"
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.562600 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b288ea-02c0-423e-b6d7-621f137afa58" path="/var/lib/kubelet/pods/20b288ea-02c0-423e-b6d7-621f137afa58/volumes"
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.567989 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66aafada-a280-4e39-96bd-0171e2f190f7" path="/var/lib/kubelet/pods/66aafada-a280-4e39-96bd-0171e2f190f7/volumes"
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.571433 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe" path="/var/lib/kubelet/pods/c3cf7364-2a74-42fe-bdb7-c00b4f21b5fe/volumes"
Jan 27 16:16:48 crc kubenswrapper[4966]: I0127 16:16:48.573123 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d060b664-4536-45b0-921f-4bc5a759fb1a" path="/var/lib/kubelet/pods/d060b664-4536-45b0-921f-4bc5a759fb1a/volumes"
pod="openshift-marketplace/redhat-operators-jtwk2" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerName="registry-server" probeResult="failure" output=< Jan 27 16:16:49 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:16:49 crc kubenswrapper[4966]: > Jan 27 16:16:58 crc kubenswrapper[4966]: I0127 16:16:58.553976 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jtwk2" Jan 27 16:16:58 crc kubenswrapper[4966]: I0127 16:16:58.607908 4966 generic.go:334] "Generic (PLEG): container finished" podID="0968ae26-d5aa-401f-b777-78d1ee76cbad" containerID="cb21997bd0ec0a13658205b2b58cc9ec37f40a3883fc7dae7b8e0acb9a092a22" exitCode=0 Jan 27 16:16:58 crc kubenswrapper[4966]: I0127 16:16:58.607960 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" event={"ID":"0968ae26-d5aa-401f-b777-78d1ee76cbad","Type":"ContainerDied","Data":"cb21997bd0ec0a13658205b2b58cc9ec37f40a3883fc7dae7b8e0acb9a092a22"} Jan 27 16:16:58 crc kubenswrapper[4966]: I0127 16:16:58.617055 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jtwk2" Jan 27 16:16:59 crc kubenswrapper[4966]: I0127 16:16:59.089601 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtwk2"] Jan 27 16:16:59 crc kubenswrapper[4966]: I0127 16:16:59.624286 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jtwk2" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerName="registry-server" containerID="cri-o://2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b" gracePeriod=2 Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.293186 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtwk2" Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.296970 4966 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.350261 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8qfv\" (UniqueName: \"kubernetes.io/projected/0968ae26-d5aa-401f-b777-78d1ee76cbad-kube-api-access-d8qfv\") pod \"0968ae26-d5aa-401f-b777-78d1ee76cbad\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") "
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.350401 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-ssh-key-openstack-edpm-ipam\") pod \"0968ae26-d5aa-401f-b777-78d1ee76cbad\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") "
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.350560 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-inventory\") pod \"0968ae26-d5aa-401f-b777-78d1ee76cbad\" (UID: \"0968ae26-d5aa-401f-b777-78d1ee76cbad\") "
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.351047 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-utilities\") pod \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") "
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.351092 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-catalog-content\") pod \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") "
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.351188 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dgrg\" (UniqueName: \"kubernetes.io/projected/ff339561-ddf4-4c7f-a90a-02a356dc59ed-kube-api-access-6dgrg\") pod \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\" (UID: \"ff339561-ddf4-4c7f-a90a-02a356dc59ed\") "
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.352819 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-utilities" (OuterVolumeSpecName: "utilities") pod "ff339561-ddf4-4c7f-a90a-02a356dc59ed" (UID: "ff339561-ddf4-4c7f-a90a-02a356dc59ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.363263 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff339561-ddf4-4c7f-a90a-02a356dc59ed-kube-api-access-6dgrg" (OuterVolumeSpecName: "kube-api-access-6dgrg") pod "ff339561-ddf4-4c7f-a90a-02a356dc59ed" (UID: "ff339561-ddf4-4c7f-a90a-02a356dc59ed"). InnerVolumeSpecName "kube-api-access-6dgrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.374078 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0968ae26-d5aa-401f-b777-78d1ee76cbad-kube-api-access-d8qfv" (OuterVolumeSpecName: "kube-api-access-d8qfv") pod "0968ae26-d5aa-401f-b777-78d1ee76cbad" (UID: "0968ae26-d5aa-401f-b777-78d1ee76cbad"). InnerVolumeSpecName "kube-api-access-d8qfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.403522 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0968ae26-d5aa-401f-b777-78d1ee76cbad" (UID: "0968ae26-d5aa-401f-b777-78d1ee76cbad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.409007 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-inventory" (OuterVolumeSpecName: "inventory") pod "0968ae26-d5aa-401f-b777-78d1ee76cbad" (UID: "0968ae26-d5aa-401f-b777-78d1ee76cbad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.456347 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.456386 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dgrg\" (UniqueName: \"kubernetes.io/projected/ff339561-ddf4-4c7f-a90a-02a356dc59ed-kube-api-access-6dgrg\") on node \"crc\" DevicePath \"\""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.456402 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8qfv\" (UniqueName: \"kubernetes.io/projected/0968ae26-d5aa-401f-b777-78d1ee76cbad-kube-api-access-d8qfv\") on node \"crc\" DevicePath \"\""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.456413 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.456425 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0968ae26-d5aa-401f-b777-78d1ee76cbad-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.528198 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff339561-ddf4-4c7f-a90a-02a356dc59ed" (UID: "ff339561-ddf4-4c7f-a90a-02a356dc59ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.561437 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff339561-ddf4-4c7f-a90a-02a356dc59ed-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.636599 4966 generic.go:334] "Generic (PLEG): container finished" podID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerID="2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b" exitCode=0
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.636660 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwk2" event={"ID":"ff339561-ddf4-4c7f-a90a-02a356dc59ed","Type":"ContainerDied","Data":"2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b"}
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.636704 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwk2" event={"ID":"ff339561-ddf4-4c7f-a90a-02a356dc59ed","Type":"ContainerDied","Data":"30386cbad8045b4df43eefa93afdd4fdd81f6df08ee4d31853f885566b90ee7b"}
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.636723 4966 scope.go:117] "RemoveContainer" containerID="2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.636786 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtwk2"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.639508 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6" event={"ID":"0968ae26-d5aa-401f-b777-78d1ee76cbad","Type":"ContainerDied","Data":"8723c8582cfaeee9db4aa335e3865f88d9091919d8b9187142486958d606c56f"}
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.639537 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8723c8582cfaeee9db4aa335e3865f88d9091919d8b9187142486958d606c56f"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.639683 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.699940 4966 scope.go:117] "RemoveContainer" containerID="6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.709841 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtwk2"]
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.727379 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jtwk2"]
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.757102 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2"]
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.757784 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerName="extract-content"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.757808 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerName="extract-content"
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.757837 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0968ae26-d5aa-401f-b777-78d1ee76cbad" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.757846 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="0968ae26-d5aa-401f-b777-78d1ee76cbad" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.757861 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerName="registry-server"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.757868 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerName="registry-server"
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.757889 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerName="extract-utilities"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.757914 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerName="extract-utilities"
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.757927 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerName="registry-server"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.757935 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerName="registry-server"
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.757950 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerName="extract-content"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.757958 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerName="extract-content"
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.757981 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerName="extract-utilities"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.757989 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerName="extract-utilities"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.758219 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="0968ae26-d5aa-401f-b777-78d1ee76cbad" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.758239 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="747a29a7-bbbb-466c-8cd3-80439f4543ce" containerName="registry-server"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.758261 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" containerName="registry-server"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.759197 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.759513 4966 scope.go:117] "RemoveContainer" containerID="d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.768545 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.768837 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.770983 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.771116 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.774840 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2"]
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.818824 4966 scope.go:117] "RemoveContainer" containerID="2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b"
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.819315 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b\": container with ID starting with 2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b not found: ID does not exist" containerID="2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.819367 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b"} err="failed to get container status \"2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b\": rpc error: code = NotFound desc = could not find container \"2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b\": container with ID starting with 2af4b094ac05a5579eaa73893e1f64710f3602d299ff45f56981354df818fe6b not found: ID does not exist"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.819396 4966 scope.go:117] "RemoveContainer" containerID="6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25"
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.819619 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25\": container with ID starting with 6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25 not found: ID does not exist" containerID="6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.819637 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25"} err="failed to get container status \"6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25\": rpc error: code = NotFound desc = could not find container \"6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25\": container with ID starting with 6a537393c6bb7df0e94ad215eca751eabe6d54386e144ae47c69d9a2febafd25 not found: ID does not exist"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.819649 4966 scope.go:117] "RemoveContainer" containerID="d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c"
Jan 27 16:17:00 crc kubenswrapper[4966]: E0127 16:17:00.819880 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c\": container with ID starting with d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c not found: ID does not exist" containerID="d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.819907 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c"} err="failed to get container status \"d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c\": rpc error: code = NotFound desc = could not find container \"d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c\": container with ID starting with d7101e908d8aaa35146a7ff5567aa1bff994ca9e8d5ac05781ca38994200699c not found: ID does not exist"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.873575 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.873680 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgttl\" (UniqueName: \"kubernetes.io/projected/1273b23b-1970-4c55-93e5-aa72f8b416af-kube-api-access-xgttl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2"
Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.874165 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2"
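The E0127 "ContainerStatus from runtime service failed ... NotFound" errors above are a benign ordering race: those containers were already removed by the preceding RemoveContainer calls, so the follow-up status lookup against CRI-O can only return NotFound, which pod_container_deletor then logs as "DeleteContainer returned error". The usual pattern is to treat a gRPC NotFound as already-deleted rather than as a failure; a Go sketch of that pattern, with an illustrative stand-in for the CRI call (not kubelet code):

    // Treating gRPC NotFound as "already deleted".
    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    func removeContainer(id string) error {
    	// stand-in for the CRI RemoveContainer RPC
    	return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func main() {
    	err := removeContainer("2af4b094ac05")
    	if status.Code(err) == codes.NotFound {
    		fmt.Println("container already gone; nothing to delete") // not a failure
    		return
    	}
    	if err != nil {
    		fmt.Println("delete failed:", err)
    	}
    }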
\"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.976589 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.976814 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.976856 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgttl\" (UniqueName: \"kubernetes.io/projected/1273b23b-1970-4c55-93e5-aa72f8b416af-kube-api-access-xgttl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.981463 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.981689 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:17:00 crc kubenswrapper[4966]: I0127 16:17:00.996296 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgttl\" (UniqueName: \"kubernetes.io/projected/1273b23b-1970-4c55-93e5-aa72f8b416af-kube-api-access-xgttl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:17:01 crc kubenswrapper[4966]: I0127 16:17:01.162732 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:17:01 crc kubenswrapper[4966]: I0127 16:17:01.752846 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2"] Jan 27 16:17:02 crc kubenswrapper[4966]: I0127 16:17:02.538449 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff339561-ddf4-4c7f-a90a-02a356dc59ed" path="/var/lib/kubelet/pods/ff339561-ddf4-4c7f-a90a-02a356dc59ed/volumes" Jan 27 16:17:02 crc kubenswrapper[4966]: I0127 16:17:02.663612 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" event={"ID":"1273b23b-1970-4c55-93e5-aa72f8b416af","Type":"ContainerStarted","Data":"0696eb3a66bb4745f3c652ba0b5263abd59c7753261f98f46811c60fdb894d12"} Jan 27 16:17:02 crc kubenswrapper[4966]: I0127 16:17:02.663964 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" event={"ID":"1273b23b-1970-4c55-93e5-aa72f8b416af","Type":"ContainerStarted","Data":"8337216016a36c36e96852bc16f2f96259307857be6fbe4ec494ac7ab92af006"} Jan 27 16:17:02 crc kubenswrapper[4966]: I0127 16:17:02.719400 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" podStartSLOduration=2.280107955 podStartE2EDuration="2.719376692s" podCreationTimestamp="2026-01-27 16:17:00 +0000 UTC" firstStartedPulling="2026-01-27 16:17:01.77553135 +0000 UTC m=+2088.078324838" lastFinishedPulling="2026-01-27 16:17:02.214800097 +0000 UTC m=+2088.517593575" observedRunningTime="2026-01-27 16:17:02.707716966 +0000 UTC m=+2089.010510454" watchObservedRunningTime="2026-01-27 16:17:02.719376692 +0000 UTC m=+2089.022170190" Jan 27 16:17:17 crc kubenswrapper[4966]: I0127 16:17:17.045392 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd9k4"] Jan 27 16:17:17 crc kubenswrapper[4966]: I0127 16:17:17.060485 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sd9k4"] Jan 27 16:17:18 crc kubenswrapper[4966]: I0127 16:17:18.537888 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2352ad0-f232-4d71-bb33-4bc933b12ca6" path="/var/lib/kubelet/pods/b2352ad0-f232-4d71-bb33-4bc933b12ca6/volumes" Jan 27 16:17:24 crc kubenswrapper[4966]: I0127 16:17:24.035981 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-lfwmv"] Jan 27 16:17:24 crc kubenswrapper[4966]: I0127 16:17:24.049978 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-36ca-account-create-update-vctkf"] Jan 27 16:17:24 crc kubenswrapper[4966]: I0127 16:17:24.061718 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-36ca-account-create-update-vctkf"] Jan 27 16:17:24 crc kubenswrapper[4966]: I0127 16:17:24.071731 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-lfwmv"] Jan 27 16:17:24 crc kubenswrapper[4966]: I0127 16:17:24.533445 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b6c82fc-6625-40e2-ade7-e4c47f978ac8" path="/var/lib/kubelet/pods/1b6c82fc-6625-40e2-ade7-e4c47f978ac8/volumes" Jan 27 16:17:24 crc kubenswrapper[4966]: I0127 16:17:24.534387 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bb334693-8f68-43f6-9d39-925d1dcd1e03" path="/var/lib/kubelet/pods/bb334693-8f68-43f6-9d39-925d1dcd1e03/volumes" Jan 27 16:17:40 crc kubenswrapper[4966]: I0127 16:17:40.048310 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-l5fz8"] Jan 27 16:17:40 crc kubenswrapper[4966]: I0127 16:17:40.069337 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-l5fz8"] Jan 27 16:17:40 crc kubenswrapper[4966]: I0127 16:17:40.534553 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b603ed-ab2c-4dc4-ae4c-0990f706529f" path="/var/lib/kubelet/pods/f6b603ed-ab2c-4dc4-ae4c-0990f706529f/volumes" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.421273 4966 scope.go:117] "RemoveContainer" containerID="f404ed0f7c03450d64be1fb867a171539600f572c52bf45ae48b5e2694033e1b" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.457439 4966 scope.go:117] "RemoveContainer" containerID="1e845d1895b3637fc8815ffe8589396ce7e555f7811afda15e4f49275df47c5b" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.525098 4966 scope.go:117] "RemoveContainer" containerID="2ef2fede60daac4b85e698cc31af3aa07a817bee99e42dc9cdeb150190a6132f" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.589149 4966 scope.go:117] "RemoveContainer" containerID="3fdc5eb1f5d7d3d4f3e78e778416f6e48d035bdc8d2d5777718cc3cdba142e3e" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.636866 4966 scope.go:117] "RemoveContainer" containerID="eb0b24bc5997f3e6578816ab3d39d3ba4b520eaf856e176519b90ac75fd0f588" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.692585 4966 scope.go:117] "RemoveContainer" containerID="896ae9306b9d4c18ba3802f9b388c30ebde835563e201621a16126f893b022ad" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.740611 4966 scope.go:117] "RemoveContainer" containerID="81d0c9cc6a9cd65a06ad5c8aea269792723874b5c60dfee8b219db19e7626468" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.766123 4966 scope.go:117] "RemoveContainer" containerID="6259c5133188e54c1c26f1b0261906b9283ae4cf74b82bfcbcfe3f690a147dd4" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.788197 4966 scope.go:117] "RemoveContainer" containerID="dcfbbf86faa9752ddb4523bfbfe993503f3943cbe0e2fb222f8c9ffbecbc8280" Jan 27 16:17:44 crc kubenswrapper[4966]: I0127 16:17:44.816468 4966 scope.go:117] "RemoveContainer" containerID="6076e27de99e7a8620e839aa9ebb4c9b7fd2ea23a74f1df77ecaaf9403f6eb1e" Jan 27 16:17:49 crc kubenswrapper[4966]: I0127 16:17:49.039397 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2fjt"] Jan 27 16:17:49 crc kubenswrapper[4966]: I0127 16:17:49.052355 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2fjt"] Jan 27 16:17:50 crc kubenswrapper[4966]: I0127 16:17:50.539947 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e490f34-cd48-4988-9a8e-ea51b08268fc" path="/var/lib/kubelet/pods/4e490f34-cd48-4988-9a8e-ea51b08268fc/volumes" Jan 27 16:18:10 crc kubenswrapper[4966]: I0127 16:18:10.119770 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:18:10 crc kubenswrapper[4966]: I0127 16:18:10.120294 4966 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:18:13 crc kubenswrapper[4966]: I0127 16:18:13.488161 4966 generic.go:334] "Generic (PLEG): container finished" podID="1273b23b-1970-4c55-93e5-aa72f8b416af" containerID="0696eb3a66bb4745f3c652ba0b5263abd59c7753261f98f46811c60fdb894d12" exitCode=0 Jan 27 16:18:13 crc kubenswrapper[4966]: I0127 16:18:13.488307 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" event={"ID":"1273b23b-1970-4c55-93e5-aa72f8b416af","Type":"ContainerDied","Data":"0696eb3a66bb4745f3c652ba0b5263abd59c7753261f98f46811c60fdb894d12"} Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.010424 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.086607 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-ssh-key-openstack-edpm-ipam\") pod \"1273b23b-1970-4c55-93e5-aa72f8b416af\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.086821 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgttl\" (UniqueName: \"kubernetes.io/projected/1273b23b-1970-4c55-93e5-aa72f8b416af-kube-api-access-xgttl\") pod \"1273b23b-1970-4c55-93e5-aa72f8b416af\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.086926 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-inventory\") pod \"1273b23b-1970-4c55-93e5-aa72f8b416af\" (UID: \"1273b23b-1970-4c55-93e5-aa72f8b416af\") " Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.092108 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1273b23b-1970-4c55-93e5-aa72f8b416af-kube-api-access-xgttl" (OuterVolumeSpecName: "kube-api-access-xgttl") pod "1273b23b-1970-4c55-93e5-aa72f8b416af" (UID: "1273b23b-1970-4c55-93e5-aa72f8b416af"). InnerVolumeSpecName "kube-api-access-xgttl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.117726 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1273b23b-1970-4c55-93e5-aa72f8b416af" (UID: "1273b23b-1970-4c55-93e5-aa72f8b416af"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.125857 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-inventory" (OuterVolumeSpecName: "inventory") pod "1273b23b-1970-4c55-93e5-aa72f8b416af" (UID: "1273b23b-1970-4c55-93e5-aa72f8b416af"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.190386 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.190423 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgttl\" (UniqueName: \"kubernetes.io/projected/1273b23b-1970-4c55-93e5-aa72f8b416af-kube-api-access-xgttl\") on node \"crc\" DevicePath \"\"" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.190434 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273b23b-1970-4c55-93e5-aa72f8b416af-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.514576 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" event={"ID":"1273b23b-1970-4c55-93e5-aa72f8b416af","Type":"ContainerDied","Data":"8337216016a36c36e96852bc16f2f96259307857be6fbe4ec494ac7ab92af006"} Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.514620 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8337216016a36c36e96852bc16f2f96259307857be6fbe4ec494ac7ab92af006" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.514673 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.609571 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7"] Jan 27 16:18:15 crc kubenswrapper[4966]: E0127 16:18:15.610233 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1273b23b-1970-4c55-93e5-aa72f8b416af" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.610257 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1273b23b-1970-4c55-93e5-aa72f8b416af" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.610552 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1273b23b-1970-4c55-93e5-aa72f8b416af" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.611648 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.613457 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.613667 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.613794 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.614247 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.620916 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7"] Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.703074 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xstc7\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.703124 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pgzn\" (UniqueName: \"kubernetes.io/projected/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-kube-api-access-9pgzn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xstc7\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.703232 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xstc7\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.805924 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xstc7\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.805968 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pgzn\" (UniqueName: \"kubernetes.io/projected/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-kube-api-access-9pgzn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xstc7\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.806037 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xstc7\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.809542 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xstc7\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.811769 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xstc7\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.825936 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pgzn\" (UniqueName: \"kubernetes.io/projected/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-kube-api-access-9pgzn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xstc7\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:15 crc kubenswrapper[4966]: I0127 16:18:15.941482 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:16 crc kubenswrapper[4966]: I0127 16:18:16.540878 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7"] Jan 27 16:18:17 crc kubenswrapper[4966]: I0127 16:18:17.542447 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" event={"ID":"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b","Type":"ContainerStarted","Data":"e00926a59b80123d44f1d3b69e12f21211a54b377c2720b6909fd08648b05e1c"} Jan 27 16:18:18 crc kubenswrapper[4966]: I0127 16:18:18.554056 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" event={"ID":"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b","Type":"ContainerStarted","Data":"4323cdf3fc86e5abd6b5d5eb77b37248398e099922216672506d5b5b96b16971"} Jan 27 16:18:18 crc kubenswrapper[4966]: I0127 16:18:18.579732 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" podStartSLOduration=2.620702283 podStartE2EDuration="3.579714929s" podCreationTimestamp="2026-01-27 16:18:15 +0000 UTC" firstStartedPulling="2026-01-27 16:18:16.5450302 +0000 UTC m=+2162.847823688" lastFinishedPulling="2026-01-27 16:18:17.504042846 +0000 UTC m=+2163.806836334" observedRunningTime="2026-01-27 16:18:18.569437898 +0000 UTC m=+2164.872231406" watchObservedRunningTime="2026-01-27 16:18:18.579714929 +0000 UTC m=+2164.882508417" Jan 27 16:18:23 crc kubenswrapper[4966]: I0127 16:18:23.614718 4966 generic.go:334] "Generic (PLEG): container finished" podID="5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b" 
containerID="4323cdf3fc86e5abd6b5d5eb77b37248398e099922216672506d5b5b96b16971" exitCode=0 Jan 27 16:18:23 crc kubenswrapper[4966]: I0127 16:18:23.614788 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" event={"ID":"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b","Type":"ContainerDied","Data":"4323cdf3fc86e5abd6b5d5eb77b37248398e099922216672506d5b5b96b16971"} Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.068779 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bt86v"] Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.082309 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bt86v"] Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.137292 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.252516 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pgzn\" (UniqueName: \"kubernetes.io/projected/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-kube-api-access-9pgzn\") pod \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.253036 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-inventory\") pod \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.253483 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-ssh-key-openstack-edpm-ipam\") pod \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\" (UID: \"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b\") " Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.258001 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-kube-api-access-9pgzn" (OuterVolumeSpecName: "kube-api-access-9pgzn") pod "5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b" (UID: "5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b"). InnerVolumeSpecName "kube-api-access-9pgzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.292423 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-inventory" (OuterVolumeSpecName: "inventory") pod "5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b" (UID: "5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.303401 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b" (UID: "5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.357027 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.357067 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.357085 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pgzn\" (UniqueName: \"kubernetes.io/projected/5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b-kube-api-access-9pgzn\") on node \"crc\" DevicePath \"\"" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.643689 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" event={"ID":"5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b","Type":"ContainerDied","Data":"e00926a59b80123d44f1d3b69e12f21211a54b377c2720b6909fd08648b05e1c"} Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.643743 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00926a59b80123d44f1d3b69e12f21211a54b377c2720b6909fd08648b05e1c" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.643828 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xstc7" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.747179 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml"] Jan 27 16:18:25 crc kubenswrapper[4966]: E0127 16:18:25.747687 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.747705 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.747926 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.748693 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.757747 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.757949 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.757998 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.758130 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.782621 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml"] Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.874297 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9d4h\" (UniqueName: \"kubernetes.io/projected/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-kube-api-access-s9d4h\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zpfml\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.874382 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zpfml\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.874547 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zpfml\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.977492 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9d4h\" (UniqueName: \"kubernetes.io/projected/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-kube-api-access-s9d4h\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zpfml\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.977569 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zpfml\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.977860 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-zpfml\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.982595 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zpfml\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:25 crc kubenswrapper[4966]: I0127 16:18:25.983432 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zpfml\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:26 crc kubenswrapper[4966]: I0127 16:18:26.000949 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9d4h\" (UniqueName: \"kubernetes.io/projected/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-kube-api-access-s9d4h\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zpfml\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:26 crc kubenswrapper[4966]: I0127 16:18:26.077955 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:18:26 crc kubenswrapper[4966]: I0127 16:18:26.542342 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c99749-de0a-41b0-a8e6-8d4bc6ada807" path="/var/lib/kubelet/pods/32c99749-de0a-41b0-a8e6-8d4bc6ada807/volumes" Jan 27 16:18:26 crc kubenswrapper[4966]: I0127 16:18:26.630241 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml"] Jan 27 16:18:26 crc kubenswrapper[4966]: I0127 16:18:26.653436 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" event={"ID":"37e4ec2b-5f24-4dae-9e35-7ec76860c36c","Type":"ContainerStarted","Data":"d305c28548489b7797caeb70aa9e8ae1a829dc5a8d0f28379bef83068044894d"} Jan 27 16:18:28 crc kubenswrapper[4966]: I0127 16:18:28.692640 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" event={"ID":"37e4ec2b-5f24-4dae-9e35-7ec76860c36c","Type":"ContainerStarted","Data":"74d5513891bb26a54421163541983ddeb4342108012764242c7756ba384c83db"} Jan 27 16:18:28 crc kubenswrapper[4966]: I0127 16:18:28.724240 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" podStartSLOduration=2.193545109 podStartE2EDuration="3.724218531s" podCreationTimestamp="2026-01-27 16:18:25 +0000 UTC" firstStartedPulling="2026-01-27 16:18:26.631866405 +0000 UTC m=+2172.934659893" lastFinishedPulling="2026-01-27 16:18:28.162539827 +0000 UTC m=+2174.465333315" observedRunningTime="2026-01-27 16:18:28.713821945 +0000 UTC m=+2175.016615453" watchObservedRunningTime="2026-01-27 16:18:28.724218531 +0000 UTC m=+2175.027012029" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.011647 4966 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fm46n"] Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.014751 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.023518 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fm46n"] Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.087617 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-utilities\") pod \"certified-operators-fm46n\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.087674 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnch9\" (UniqueName: \"kubernetes.io/projected/e633dd15-957d-49c7-b604-f01bd9e73e64-kube-api-access-rnch9\") pod \"certified-operators-fm46n\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.087800 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-catalog-content\") pod \"certified-operators-fm46n\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.190045 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-utilities\") pod \"certified-operators-fm46n\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.190522 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnch9\" (UniqueName: \"kubernetes.io/projected/e633dd15-957d-49c7-b604-f01bd9e73e64-kube-api-access-rnch9\") pod \"certified-operators-fm46n\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.190466 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-utilities\") pod \"certified-operators-fm46n\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.191214 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-catalog-content\") pod \"certified-operators-fm46n\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.191501 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-catalog-content\") pod \"certified-operators-fm46n\" 
(UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.213826 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnch9\" (UniqueName: \"kubernetes.io/projected/e633dd15-957d-49c7-b604-f01bd9e73e64-kube-api-access-rnch9\") pod \"certified-operators-fm46n\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.345296 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:34 crc kubenswrapper[4966]: I0127 16:18:34.972078 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fm46n"] Jan 27 16:18:35 crc kubenswrapper[4966]: I0127 16:18:35.770334 4966 generic.go:334] "Generic (PLEG): container finished" podID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerID="14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df" exitCode=0 Jan 27 16:18:35 crc kubenswrapper[4966]: I0127 16:18:35.770439 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm46n" event={"ID":"e633dd15-957d-49c7-b604-f01bd9e73e64","Type":"ContainerDied","Data":"14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df"} Jan 27 16:18:35 crc kubenswrapper[4966]: I0127 16:18:35.770679 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm46n" event={"ID":"e633dd15-957d-49c7-b604-f01bd9e73e64","Type":"ContainerStarted","Data":"9d35c6fd655c894ba54eed3f5985ba38f9d17ab2ed85713845c99c8654689b9f"} Jan 27 16:18:37 crc kubenswrapper[4966]: I0127 16:18:37.797859 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm46n" event={"ID":"e633dd15-957d-49c7-b604-f01bd9e73e64","Type":"ContainerStarted","Data":"21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37"} Jan 27 16:18:38 crc kubenswrapper[4966]: I0127 16:18:38.809624 4966 generic.go:334] "Generic (PLEG): container finished" podID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerID="21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37" exitCode=0 Jan 27 16:18:38 crc kubenswrapper[4966]: I0127 16:18:38.809671 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm46n" event={"ID":"e633dd15-957d-49c7-b604-f01bd9e73e64","Type":"ContainerDied","Data":"21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37"} Jan 27 16:18:39 crc kubenswrapper[4966]: I0127 16:18:39.825793 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm46n" event={"ID":"e633dd15-957d-49c7-b604-f01bd9e73e64","Type":"ContainerStarted","Data":"e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718"} Jan 27 16:18:39 crc kubenswrapper[4966]: I0127 16:18:39.860271 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fm46n" podStartSLOduration=3.300042827 podStartE2EDuration="6.860246509s" podCreationTimestamp="2026-01-27 16:18:33 +0000 UTC" firstStartedPulling="2026-01-27 16:18:35.773597887 +0000 UTC m=+2182.076391375" lastFinishedPulling="2026-01-27 16:18:39.333801559 +0000 UTC m=+2185.636595057" observedRunningTime="2026-01-27 16:18:39.848088837 +0000 UTC m=+2186.150882335" 
watchObservedRunningTime="2026-01-27 16:18:39.860246509 +0000 UTC m=+2186.163040007" Jan 27 16:18:40 crc kubenswrapper[4966]: I0127 16:18:40.120718 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:18:40 crc kubenswrapper[4966]: I0127 16:18:40.121266 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:18:44 crc kubenswrapper[4966]: I0127 16:18:44.346315 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:44 crc kubenswrapper[4966]: I0127 16:18:44.346870 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:44 crc kubenswrapper[4966]: I0127 16:18:44.394560 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:44 crc kubenswrapper[4966]: I0127 16:18:44.932599 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:45 crc kubenswrapper[4966]: I0127 16:18:45.003827 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fm46n"] Jan 27 16:18:45 crc kubenswrapper[4966]: I0127 16:18:45.061536 4966 scope.go:117] "RemoveContainer" containerID="72d3d1c8f7e26e88a0d2c3f86a38135af1c493d3a994402eccb79d631ca88c54" Jan 27 16:18:45 crc kubenswrapper[4966]: I0127 16:18:45.095087 4966 scope.go:117] "RemoveContainer" containerID="62877e76b852eff89dfebb34eae1818ceec73fcc492d846838100c3684171451" Jan 27 16:18:46 crc kubenswrapper[4966]: I0127 16:18:46.901415 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fm46n" podUID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerName="registry-server" containerID="cri-o://e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718" gracePeriod=2 Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.438068 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.550109 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnch9\" (UniqueName: \"kubernetes.io/projected/e633dd15-957d-49c7-b604-f01bd9e73e64-kube-api-access-rnch9\") pod \"e633dd15-957d-49c7-b604-f01bd9e73e64\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.550595 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-catalog-content\") pod \"e633dd15-957d-49c7-b604-f01bd9e73e64\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.550741 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-utilities\") pod \"e633dd15-957d-49c7-b604-f01bd9e73e64\" (UID: \"e633dd15-957d-49c7-b604-f01bd9e73e64\") " Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.553452 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-utilities" (OuterVolumeSpecName: "utilities") pod "e633dd15-957d-49c7-b604-f01bd9e73e64" (UID: "e633dd15-957d-49c7-b604-f01bd9e73e64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.560624 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e633dd15-957d-49c7-b604-f01bd9e73e64-kube-api-access-rnch9" (OuterVolumeSpecName: "kube-api-access-rnch9") pod "e633dd15-957d-49c7-b604-f01bd9e73e64" (UID: "e633dd15-957d-49c7-b604-f01bd9e73e64"). InnerVolumeSpecName "kube-api-access-rnch9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.613762 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e633dd15-957d-49c7-b604-f01bd9e73e64" (UID: "e633dd15-957d-49c7-b604-f01bd9e73e64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.655541 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.655587 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e633dd15-957d-49c7-b604-f01bd9e73e64-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.655598 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnch9\" (UniqueName: \"kubernetes.io/projected/e633dd15-957d-49c7-b604-f01bd9e73e64-kube-api-access-rnch9\") on node \"crc\" DevicePath \"\"" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.913561 4966 generic.go:334] "Generic (PLEG): container finished" podID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerID="e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718" exitCode=0 Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.913673 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm46n" event={"ID":"e633dd15-957d-49c7-b604-f01bd9e73e64","Type":"ContainerDied","Data":"e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718"} Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.915164 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm46n" event={"ID":"e633dd15-957d-49c7-b604-f01bd9e73e64","Type":"ContainerDied","Data":"9d35c6fd655c894ba54eed3f5985ba38f9d17ab2ed85713845c99c8654689b9f"} Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.913735 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fm46n" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.915228 4966 scope.go:117] "RemoveContainer" containerID="e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.936845 4966 scope.go:117] "RemoveContainer" containerID="21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37" Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.959412 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fm46n"] Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.974758 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fm46n"] Jan 27 16:18:47 crc kubenswrapper[4966]: I0127 16:18:47.979537 4966 scope.go:117] "RemoveContainer" containerID="14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df" Jan 27 16:18:48 crc kubenswrapper[4966]: I0127 16:18:48.044295 4966 scope.go:117] "RemoveContainer" containerID="e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718" Jan 27 16:18:48 crc kubenswrapper[4966]: E0127 16:18:48.046176 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718\": container with ID starting with e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718 not found: ID does not exist" containerID="e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718" Jan 27 16:18:48 crc kubenswrapper[4966]: I0127 16:18:48.046245 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718"} err="failed to get container status \"e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718\": rpc error: code = NotFound desc = could not find container \"e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718\": container with ID starting with e679c167974f63404cef4e230580503c7607768b65e26097e44d3a9a0de1a718 not found: ID does not exist" Jan 27 16:18:48 crc kubenswrapper[4966]: I0127 16:18:48.046292 4966 scope.go:117] "RemoveContainer" containerID="21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37" Jan 27 16:18:48 crc kubenswrapper[4966]: E0127 16:18:48.047074 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37\": container with ID starting with 21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37 not found: ID does not exist" containerID="21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37" Jan 27 16:18:48 crc kubenswrapper[4966]: I0127 16:18:48.047115 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37"} err="failed to get container status \"21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37\": rpc error: code = NotFound desc = could not find container \"21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37\": container with ID starting with 21a036bcec172ddba8276b493ddbe654e458567794647bff2c60abbfbca0ff37 not found: ID does not exist" Jan 27 16:18:48 crc kubenswrapper[4966]: I0127 16:18:48.047163 4966 scope.go:117] "RemoveContainer" 
containerID="14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df" Jan 27 16:18:48 crc kubenswrapper[4966]: E0127 16:18:48.047791 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df\": container with ID starting with 14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df not found: ID does not exist" containerID="14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df" Jan 27 16:18:48 crc kubenswrapper[4966]: I0127 16:18:48.047828 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df"} err="failed to get container status \"14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df\": rpc error: code = NotFound desc = could not find container \"14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df\": container with ID starting with 14f0bbb178a79cb5e1947ae737e21d5aa420f0700d22de2c32e0467954d5a7df not found: ID does not exist" Jan 27 16:18:48 crc kubenswrapper[4966]: I0127 16:18:48.533646 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e633dd15-957d-49c7-b604-f01bd9e73e64" path="/var/lib/kubelet/pods/e633dd15-957d-49c7-b604-f01bd9e73e64/volumes" Jan 27 16:19:10 crc kubenswrapper[4966]: I0127 16:19:10.121053 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:19:10 crc kubenswrapper[4966]: I0127 16:19:10.121691 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:19:10 crc kubenswrapper[4966]: I0127 16:19:10.121739 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 16:19:10 crc kubenswrapper[4966]: I0127 16:19:10.122634 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:19:10 crc kubenswrapper[4966]: I0127 16:19:10.122726 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" gracePeriod=600 Jan 27 16:19:10 crc kubenswrapper[4966]: I0127 16:19:10.145796 4966 generic.go:334] "Generic (PLEG): container finished" podID="37e4ec2b-5f24-4dae-9e35-7ec76860c36c" containerID="74d5513891bb26a54421163541983ddeb4342108012764242c7756ba384c83db" exitCode=0 Jan 27 16:19:10 crc kubenswrapper[4966]: I0127 16:19:10.145851 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" event={"ID":"37e4ec2b-5f24-4dae-9e35-7ec76860c36c","Type":"ContainerDied","Data":"74d5513891bb26a54421163541983ddeb4342108012764242c7756ba384c83db"} Jan 27 16:19:10 crc kubenswrapper[4966]: E0127 16:19:10.801660 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.163074 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" exitCode=0 Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.163273 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54"} Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.163317 4966 scope.go:117] "RemoveContainer" containerID="ccc6669efff84876a93e08ec54b68bd51a8c24361bb45057ba982fe790d465ea" Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.164250 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:19:11 crc kubenswrapper[4966]: E0127 16:19:11.164587 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.730992 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.755517 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-inventory\") pod \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.755603 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9d4h\" (UniqueName: \"kubernetes.io/projected/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-kube-api-access-s9d4h\") pod \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.755691 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-ssh-key-openstack-edpm-ipam\") pod \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\" (UID: \"37e4ec2b-5f24-4dae-9e35-7ec76860c36c\") " Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.772245 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-kube-api-access-s9d4h" (OuterVolumeSpecName: "kube-api-access-s9d4h") pod "37e4ec2b-5f24-4dae-9e35-7ec76860c36c" (UID: "37e4ec2b-5f24-4dae-9e35-7ec76860c36c"). InnerVolumeSpecName "kube-api-access-s9d4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.796716 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-inventory" (OuterVolumeSpecName: "inventory") pod "37e4ec2b-5f24-4dae-9e35-7ec76860c36c" (UID: "37e4ec2b-5f24-4dae-9e35-7ec76860c36c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.811878 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37e4ec2b-5f24-4dae-9e35-7ec76860c36c" (UID: "37e4ec2b-5f24-4dae-9e35-7ec76860c36c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.858415 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.858464 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9d4h\" (UniqueName: \"kubernetes.io/projected/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-kube-api-access-s9d4h\") on node \"crc\" DevicePath \"\"" Jan 27 16:19:11 crc kubenswrapper[4966]: I0127 16:19:11.858481 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37e4ec2b-5f24-4dae-9e35-7ec76860c36c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.177278 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" event={"ID":"37e4ec2b-5f24-4dae-9e35-7ec76860c36c","Type":"ContainerDied","Data":"d305c28548489b7797caeb70aa9e8ae1a829dc5a8d0f28379bef83068044894d"} Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.177646 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d305c28548489b7797caeb70aa9e8ae1a829dc5a8d0f28379bef83068044894d" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.177746 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zpfml" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.264759 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7"] Jan 27 16:19:12 crc kubenswrapper[4966]: E0127 16:19:12.265363 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerName="extract-utilities" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.265404 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerName="extract-utilities" Jan 27 16:19:12 crc kubenswrapper[4966]: E0127 16:19:12.265450 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerName="extract-content" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.265459 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerName="extract-content" Jan 27 16:19:12 crc kubenswrapper[4966]: E0127 16:19:12.265488 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerName="registry-server" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.265496 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerName="registry-server" Jan 27 16:19:12 crc kubenswrapper[4966]: E0127 16:19:12.265510 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e4ec2b-5f24-4dae-9e35-7ec76860c36c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.265519 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e4ec2b-5f24-4dae-9e35-7ec76860c36c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.265771 
4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e633dd15-957d-49c7-b604-f01bd9e73e64" containerName="registry-server" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.265815 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e4ec2b-5f24-4dae-9e35-7ec76860c36c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.266852 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.269583 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.270127 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.270318 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.270378 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b7ln\" (UniqueName: \"kubernetes.io/projected/5bb77603-0e4b-4d98-961e-a669b0ceee35-kube-api-access-8b7ln\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.271221 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.271506 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.271574 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.275188 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7"] Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.372791 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.372972 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.373042 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b7ln\" (UniqueName: \"kubernetes.io/projected/5bb77603-0e4b-4d98-961e-a669b0ceee35-kube-api-access-8b7ln\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.377820 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.384068 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.395252 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b7ln\" (UniqueName: \"kubernetes.io/projected/5bb77603-0e4b-4d98-961e-a669b0ceee35-kube-api-access-8b7ln\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:12 crc kubenswrapper[4966]: I0127 16:19:12.602331 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.107271 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7"] Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.189451 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" event={"ID":"5bb77603-0e4b-4d98-961e-a669b0ceee35","Type":"ContainerStarted","Data":"ee62895a3d8f68e3c1b2f86c7996b64f1a8b10d1cf27d09a370b9f1e80637e07"} Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.226976 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fhwnx"] Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.230275 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.238565 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhwnx"] Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.409857 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjw2f\" (UniqueName: \"kubernetes.io/projected/133cc52f-c691-4080-b843-1a7c81cf7327-kube-api-access-bjw2f\") pod \"community-operators-fhwnx\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.410082 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-catalog-content\") pod \"community-operators-fhwnx\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.410143 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-utilities\") pod \"community-operators-fhwnx\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.512220 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjw2f\" (UniqueName: \"kubernetes.io/projected/133cc52f-c691-4080-b843-1a7c81cf7327-kube-api-access-bjw2f\") pod \"community-operators-fhwnx\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.512297 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-catalog-content\") pod \"community-operators-fhwnx\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.512327 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-utilities\") pod \"community-operators-fhwnx\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.512890 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-utilities\") pod \"community-operators-fhwnx\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.513192 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-catalog-content\") pod \"community-operators-fhwnx\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.543124 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bjw2f\" (UniqueName: \"kubernetes.io/projected/133cc52f-c691-4080-b843-1a7c81cf7327-kube-api-access-bjw2f\") pod \"community-operators-fhwnx\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:13 crc kubenswrapper[4966]: I0127 16:19:13.564376 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:14 crc kubenswrapper[4966]: I0127 16:19:14.317174 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhwnx"] Jan 27 16:19:14 crc kubenswrapper[4966]: W0127 16:19:14.317411 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod133cc52f_c691_4080_b843_1a7c81cf7327.slice/crio-ac69848d309b40b6ec7c4e500b2f7906639cfef0d8e8812d82e5084b59834f46 WatchSource:0}: Error finding container ac69848d309b40b6ec7c4e500b2f7906639cfef0d8e8812d82e5084b59834f46: Status 404 returned error can't find the container with id ac69848d309b40b6ec7c4e500b2f7906639cfef0d8e8812d82e5084b59834f46 Jan 27 16:19:14 crc kubenswrapper[4966]: I0127 16:19:14.681257 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:19:15 crc kubenswrapper[4966]: I0127 16:19:15.214809 4966 generic.go:334] "Generic (PLEG): container finished" podID="133cc52f-c691-4080-b843-1a7c81cf7327" containerID="d70ead958cf7aea19e7645185630d9109c89faed996525ec698bf70466198cc4" exitCode=0 Jan 27 16:19:15 crc kubenswrapper[4966]: I0127 16:19:15.214940 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhwnx" event={"ID":"133cc52f-c691-4080-b843-1a7c81cf7327","Type":"ContainerDied","Data":"d70ead958cf7aea19e7645185630d9109c89faed996525ec698bf70466198cc4"} Jan 27 16:19:15 crc kubenswrapper[4966]: I0127 16:19:15.215251 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhwnx" event={"ID":"133cc52f-c691-4080-b843-1a7c81cf7327","Type":"ContainerStarted","Data":"ac69848d309b40b6ec7c4e500b2f7906639cfef0d8e8812d82e5084b59834f46"} Jan 27 16:19:15 crc kubenswrapper[4966]: I0127 16:19:15.217629 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" event={"ID":"5bb77603-0e4b-4d98-961e-a669b0ceee35","Type":"ContainerStarted","Data":"925b8e3da08de817e069395f5ca2dd10cb8e4996c5fa1d870472c5290438103c"} Jan 27 16:19:15 crc kubenswrapper[4966]: I0127 16:19:15.285421 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" podStartSLOduration=1.723826924 podStartE2EDuration="3.285391475s" podCreationTimestamp="2026-01-27 16:19:12 +0000 UTC" firstStartedPulling="2026-01-27 16:19:13.116545921 +0000 UTC m=+2219.419339409" lastFinishedPulling="2026-01-27 16:19:14.678110452 +0000 UTC m=+2220.980903960" observedRunningTime="2026-01-27 16:19:15.269030783 +0000 UTC m=+2221.571824311" watchObservedRunningTime="2026-01-27 16:19:15.285391475 +0000 UTC m=+2221.588184973" Jan 27 16:19:17 crc kubenswrapper[4966]: I0127 16:19:17.242608 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhwnx" 
event={"ID":"133cc52f-c691-4080-b843-1a7c81cf7327","Type":"ContainerStarted","Data":"72dd2c28e090489052c38b094702797dd24252bd067ba3c4a0f5cb2468f575f8"} Jan 27 16:19:20 crc kubenswrapper[4966]: I0127 16:19:20.277802 4966 generic.go:334] "Generic (PLEG): container finished" podID="133cc52f-c691-4080-b843-1a7c81cf7327" containerID="72dd2c28e090489052c38b094702797dd24252bd067ba3c4a0f5cb2468f575f8" exitCode=0 Jan 27 16:19:20 crc kubenswrapper[4966]: I0127 16:19:20.277948 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhwnx" event={"ID":"133cc52f-c691-4080-b843-1a7c81cf7327","Type":"ContainerDied","Data":"72dd2c28e090489052c38b094702797dd24252bd067ba3c4a0f5cb2468f575f8"} Jan 27 16:19:22 crc kubenswrapper[4966]: I0127 16:19:22.311842 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhwnx" event={"ID":"133cc52f-c691-4080-b843-1a7c81cf7327","Type":"ContainerStarted","Data":"301475f95f734364edaf8b72edb8181b638cb003b3a7527c6a57e67d89a796d2"} Jan 27 16:19:22 crc kubenswrapper[4966]: I0127 16:19:22.337444 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fhwnx" podStartSLOduration=3.5914409579999997 podStartE2EDuration="9.337427235s" podCreationTimestamp="2026-01-27 16:19:13 +0000 UTC" firstStartedPulling="2026-01-27 16:19:15.217562549 +0000 UTC m=+2221.520356047" lastFinishedPulling="2026-01-27 16:19:20.963548836 +0000 UTC m=+2227.266342324" observedRunningTime="2026-01-27 16:19:22.334527494 +0000 UTC m=+2228.637321022" watchObservedRunningTime="2026-01-27 16:19:22.337427235 +0000 UTC m=+2228.640220723" Jan 27 16:19:23 crc kubenswrapper[4966]: I0127 16:19:23.565373 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:23 crc kubenswrapper[4966]: I0127 16:19:23.565690 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:23 crc kubenswrapper[4966]: I0127 16:19:23.611568 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:25 crc kubenswrapper[4966]: I0127 16:19:25.521268 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:19:25 crc kubenswrapper[4966]: E0127 16:19:25.521852 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:19:33 crc kubenswrapper[4966]: I0127 16:19:33.617335 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:33 crc kubenswrapper[4966]: I0127 16:19:33.676118 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fhwnx"] Jan 27 16:19:34 crc kubenswrapper[4966]: I0127 16:19:34.445777 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fhwnx" podUID="133cc52f-c691-4080-b843-1a7c81cf7327" 
containerName="registry-server" containerID="cri-o://301475f95f734364edaf8b72edb8181b638cb003b3a7527c6a57e67d89a796d2" gracePeriod=2 Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.457937 4966 generic.go:334] "Generic (PLEG): container finished" podID="133cc52f-c691-4080-b843-1a7c81cf7327" containerID="301475f95f734364edaf8b72edb8181b638cb003b3a7527c6a57e67d89a796d2" exitCode=0 Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.457949 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhwnx" event={"ID":"133cc52f-c691-4080-b843-1a7c81cf7327","Type":"ContainerDied","Data":"301475f95f734364edaf8b72edb8181b638cb003b3a7527c6a57e67d89a796d2"} Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.458256 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhwnx" event={"ID":"133cc52f-c691-4080-b843-1a7c81cf7327","Type":"ContainerDied","Data":"ac69848d309b40b6ec7c4e500b2f7906639cfef0d8e8812d82e5084b59834f46"} Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.458275 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac69848d309b40b6ec7c4e500b2f7906639cfef0d8e8812d82e5084b59834f46" Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.533796 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.610059 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-utilities\") pod \"133cc52f-c691-4080-b843-1a7c81cf7327\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.610169 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjw2f\" (UniqueName: \"kubernetes.io/projected/133cc52f-c691-4080-b843-1a7c81cf7327-kube-api-access-bjw2f\") pod \"133cc52f-c691-4080-b843-1a7c81cf7327\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.610263 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-catalog-content\") pod \"133cc52f-c691-4080-b843-1a7c81cf7327\" (UID: \"133cc52f-c691-4080-b843-1a7c81cf7327\") " Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.611273 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-utilities" (OuterVolumeSpecName: "utilities") pod "133cc52f-c691-4080-b843-1a7c81cf7327" (UID: "133cc52f-c691-4080-b843-1a7c81cf7327"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.612266 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.616059 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133cc52f-c691-4080-b843-1a7c81cf7327-kube-api-access-bjw2f" (OuterVolumeSpecName: "kube-api-access-bjw2f") pod "133cc52f-c691-4080-b843-1a7c81cf7327" (UID: "133cc52f-c691-4080-b843-1a7c81cf7327"). InnerVolumeSpecName "kube-api-access-bjw2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.674098 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "133cc52f-c691-4080-b843-1a7c81cf7327" (UID: "133cc52f-c691-4080-b843-1a7c81cf7327"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.715044 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjw2f\" (UniqueName: \"kubernetes.io/projected/133cc52f-c691-4080-b843-1a7c81cf7327-kube-api-access-bjw2f\") on node \"crc\" DevicePath \"\"" Jan 27 16:19:35 crc kubenswrapper[4966]: I0127 16:19:35.715305 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133cc52f-c691-4080-b843-1a7c81cf7327-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:19:36 crc kubenswrapper[4966]: I0127 16:19:36.469767 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fhwnx" Jan 27 16:19:36 crc kubenswrapper[4966]: I0127 16:19:36.518129 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fhwnx"] Jan 27 16:19:36 crc kubenswrapper[4966]: I0127 16:19:36.539781 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fhwnx"] Jan 27 16:19:37 crc kubenswrapper[4966]: I0127 16:19:37.521116 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:19:37 crc kubenswrapper[4966]: E0127 16:19:37.521570 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:19:38 crc kubenswrapper[4966]: I0127 16:19:38.533417 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133cc52f-c691-4080-b843-1a7c81cf7327" path="/var/lib/kubelet/pods/133cc52f-c691-4080-b843-1a7c81cf7327/volumes" Jan 27 16:19:46 crc kubenswrapper[4966]: I0127 16:19:46.043681 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jjm6h"] Jan 27 16:19:46 crc kubenswrapper[4966]: I0127 16:19:46.063683 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jjm6h"] Jan 27 16:19:46 crc kubenswrapper[4966]: I0127 16:19:46.567572 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a82b08-ff78-4e1e-b183-e1d06925aa5e" path="/var/lib/kubelet/pods/65a82b08-ff78-4e1e-b183-e1d06925aa5e/volumes" Jan 27 16:19:51 crc kubenswrapper[4966]: I0127 16:19:51.521616 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:19:51 crc kubenswrapper[4966]: E0127 16:19:51.522810 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:20:05 crc kubenswrapper[4966]: I0127 16:20:05.521798 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:20:05 crc kubenswrapper[4966]: E0127 16:20:05.522880 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:20:06 crc kubenswrapper[4966]: I0127 16:20:06.778034 4966 generic.go:334] "Generic (PLEG): container finished" podID="5bb77603-0e4b-4d98-961e-a669b0ceee35" containerID="925b8e3da08de817e069395f5ca2dd10cb8e4996c5fa1d870472c5290438103c" exitCode=0 Jan 27 16:20:06 crc kubenswrapper[4966]: I0127 
16:20:06.778213 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" event={"ID":"5bb77603-0e4b-4d98-961e-a669b0ceee35","Type":"ContainerDied","Data":"925b8e3da08de817e069395f5ca2dd10cb8e4996c5fa1d870472c5290438103c"} Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.280120 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.344866 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-ssh-key-openstack-edpm-ipam\") pod \"5bb77603-0e4b-4d98-961e-a669b0ceee35\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.345226 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-inventory\") pod \"5bb77603-0e4b-4d98-961e-a669b0ceee35\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.345403 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b7ln\" (UniqueName: \"kubernetes.io/projected/5bb77603-0e4b-4d98-961e-a669b0ceee35-kube-api-access-8b7ln\") pod \"5bb77603-0e4b-4d98-961e-a669b0ceee35\" (UID: \"5bb77603-0e4b-4d98-961e-a669b0ceee35\") " Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.351276 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb77603-0e4b-4d98-961e-a669b0ceee35-kube-api-access-8b7ln" (OuterVolumeSpecName: "kube-api-access-8b7ln") pod "5bb77603-0e4b-4d98-961e-a669b0ceee35" (UID: "5bb77603-0e4b-4d98-961e-a669b0ceee35"). InnerVolumeSpecName "kube-api-access-8b7ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.382855 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5bb77603-0e4b-4d98-961e-a669b0ceee35" (UID: "5bb77603-0e4b-4d98-961e-a669b0ceee35"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.383566 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-inventory" (OuterVolumeSpecName: "inventory") pod "5bb77603-0e4b-4d98-961e-a669b0ceee35" (UID: "5bb77603-0e4b-4d98-961e-a669b0ceee35"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.448982 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.449019 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b7ln\" (UniqueName: \"kubernetes.io/projected/5bb77603-0e4b-4d98-961e-a669b0ceee35-kube-api-access-8b7ln\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.449034 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5bb77603-0e4b-4d98-961e-a669b0ceee35-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.800389 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" event={"ID":"5bb77603-0e4b-4d98-961e-a669b0ceee35","Type":"ContainerDied","Data":"ee62895a3d8f68e3c1b2f86c7996b64f1a8b10d1cf27d09a370b9f1e80637e07"} Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.800711 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee62895a3d8f68e3c1b2f86c7996b64f1a8b10d1cf27d09a370b9f1e80637e07" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.800763 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.896090 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-c5hdk"] Jan 27 16:20:08 crc kubenswrapper[4966]: E0127 16:20:08.896789 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133cc52f-c691-4080-b843-1a7c81cf7327" containerName="extract-utilities" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.896817 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="133cc52f-c691-4080-b843-1a7c81cf7327" containerName="extract-utilities" Jan 27 16:20:08 crc kubenswrapper[4966]: E0127 16:20:08.896871 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb77603-0e4b-4d98-961e-a669b0ceee35" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.896885 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb77603-0e4b-4d98-961e-a669b0ceee35" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:20:08 crc kubenswrapper[4966]: E0127 16:20:08.896982 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133cc52f-c691-4080-b843-1a7c81cf7327" containerName="registry-server" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.896998 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="133cc52f-c691-4080-b843-1a7c81cf7327" containerName="registry-server" Jan 27 16:20:08 crc kubenswrapper[4966]: E0127 16:20:08.897014 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133cc52f-c691-4080-b843-1a7c81cf7327" containerName="extract-content" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.897023 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="133cc52f-c691-4080-b843-1a7c81cf7327" containerName="extract-content" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.897275 4966 
memory_manager.go:354] "RemoveStaleState removing state" podUID="133cc52f-c691-4080-b843-1a7c81cf7327" containerName="registry-server" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.897289 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bb77603-0e4b-4d98-961e-a669b0ceee35" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.898260 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.901504 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.901743 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.901866 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.902381 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.910649 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-c5hdk"] Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.970148 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swlz\" (UniqueName: \"kubernetes.io/projected/eb03c226-9aea-45da-aa90-3243fce92eee-kube-api-access-5swlz\") pod \"ssh-known-hosts-edpm-deployment-c5hdk\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.970348 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-c5hdk\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:08 crc kubenswrapper[4966]: I0127 16:20:08.970448 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-c5hdk\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.072649 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-c5hdk\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.072748 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-c5hdk\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.072852 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5swlz\" (UniqueName: \"kubernetes.io/projected/eb03c226-9aea-45da-aa90-3243fce92eee-kube-api-access-5swlz\") pod \"ssh-known-hosts-edpm-deployment-c5hdk\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.084513 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-c5hdk\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.085735 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-c5hdk\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.089372 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swlz\" (UniqueName: \"kubernetes.io/projected/eb03c226-9aea-45da-aa90-3243fce92eee-kube-api-access-5swlz\") pod \"ssh-known-hosts-edpm-deployment-c5hdk\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.218006 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.767537 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-c5hdk"] Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.770121 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:20:09 crc kubenswrapper[4966]: I0127 16:20:09.811119 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" event={"ID":"eb03c226-9aea-45da-aa90-3243fce92eee","Type":"ContainerStarted","Data":"3091e5cdcc9a85d2f74c6cf3b22aff9e1fe0a9bcabe6f1019fbf5de336b49092"} Jan 27 16:20:10 crc kubenswrapper[4966]: I0127 16:20:10.823863 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" event={"ID":"eb03c226-9aea-45da-aa90-3243fce92eee","Type":"ContainerStarted","Data":"68efae5fbc1b55b32265f7d54cae5b727c9466d61210190ad5535181f8ff5253"} Jan 27 16:20:10 crc kubenswrapper[4966]: I0127 16:20:10.843171 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" podStartSLOduration=2.145873905 podStartE2EDuration="2.843148373s" podCreationTimestamp="2026-01-27 16:20:08 +0000 UTC" firstStartedPulling="2026-01-27 16:20:09.769543062 +0000 UTC m=+2276.072336570" lastFinishedPulling="2026-01-27 16:20:10.46681755 +0000 UTC m=+2276.769611038" observedRunningTime="2026-01-27 16:20:10.836756462 +0000 UTC m=+2277.139549950" watchObservedRunningTime="2026-01-27 16:20:10.843148373 +0000 UTC m=+2277.145941871" Jan 27 16:20:17 crc kubenswrapper[4966]: I0127 16:20:17.902167 4966 generic.go:334] "Generic (PLEG): container finished" podID="eb03c226-9aea-45da-aa90-3243fce92eee" containerID="68efae5fbc1b55b32265f7d54cae5b727c9466d61210190ad5535181f8ff5253" exitCode=0 Jan 27 16:20:17 crc kubenswrapper[4966]: I0127 16:20:17.902259 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" event={"ID":"eb03c226-9aea-45da-aa90-3243fce92eee","Type":"ContainerDied","Data":"68efae5fbc1b55b32265f7d54cae5b727c9466d61210190ad5535181f8ff5253"} Jan 27 16:20:18 crc kubenswrapper[4966]: I0127 16:20:18.521433 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:20:18 crc kubenswrapper[4966]: E0127 16:20:18.522121 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.430266 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.531819 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-inventory-0\") pod \"eb03c226-9aea-45da-aa90-3243fce92eee\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.531951 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5swlz\" (UniqueName: \"kubernetes.io/projected/eb03c226-9aea-45da-aa90-3243fce92eee-kube-api-access-5swlz\") pod \"eb03c226-9aea-45da-aa90-3243fce92eee\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.532184 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-ssh-key-openstack-edpm-ipam\") pod \"eb03c226-9aea-45da-aa90-3243fce92eee\" (UID: \"eb03c226-9aea-45da-aa90-3243fce92eee\") " Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.540438 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb03c226-9aea-45da-aa90-3243fce92eee-kube-api-access-5swlz" (OuterVolumeSpecName: "kube-api-access-5swlz") pod "eb03c226-9aea-45da-aa90-3243fce92eee" (UID: "eb03c226-9aea-45da-aa90-3243fce92eee"). InnerVolumeSpecName "kube-api-access-5swlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.566519 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eb03c226-9aea-45da-aa90-3243fce92eee" (UID: "eb03c226-9aea-45da-aa90-3243fce92eee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.569638 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "eb03c226-9aea-45da-aa90-3243fce92eee" (UID: "eb03c226-9aea-45da-aa90-3243fce92eee"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.636561 4966 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.636608 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5swlz\" (UniqueName: \"kubernetes.io/projected/eb03c226-9aea-45da-aa90-3243fce92eee-kube-api-access-5swlz\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.636625 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb03c226-9aea-45da-aa90-3243fce92eee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.925778 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" event={"ID":"eb03c226-9aea-45da-aa90-3243fce92eee","Type":"ContainerDied","Data":"3091e5cdcc9a85d2f74c6cf3b22aff9e1fe0a9bcabe6f1019fbf5de336b49092"} Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.926189 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3091e5cdcc9a85d2f74c6cf3b22aff9e1fe0a9bcabe6f1019fbf5de336b49092" Jan 27 16:20:19 crc kubenswrapper[4966]: I0127 16:20:19.925879 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c5hdk" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.007095 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq"] Jan 27 16:20:20 crc kubenswrapper[4966]: E0127 16:20:20.007723 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb03c226-9aea-45da-aa90-3243fce92eee" containerName="ssh-known-hosts-edpm-deployment" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.007748 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb03c226-9aea-45da-aa90-3243fce92eee" containerName="ssh-known-hosts-edpm-deployment" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.008099 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb03c226-9aea-45da-aa90-3243fce92eee" containerName="ssh-known-hosts-edpm-deployment" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.009264 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.011111 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.011771 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.012019 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.012301 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.023484 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq"] Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.045869 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-775hq\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.045939 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dslzd\" (UniqueName: \"kubernetes.io/projected/ebb6cac6-208d-407d-b0a5-5556e1968d8d-kube-api-access-dslzd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-775hq\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.046101 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-775hq\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.148023 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-775hq\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.148212 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-775hq\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.148255 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dslzd\" (UniqueName: \"kubernetes.io/projected/ebb6cac6-208d-407d-b0a5-5556e1968d8d-kube-api-access-dslzd\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-775hq\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.152806 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-775hq\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.160944 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-775hq\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.164975 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dslzd\" (UniqueName: \"kubernetes.io/projected/ebb6cac6-208d-407d-b0a5-5556e1968d8d-kube-api-access-dslzd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-775hq\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.330015 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:20 crc kubenswrapper[4966]: W0127 16:20:20.991566 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebb6cac6_208d_407d_b0a5_5556e1968d8d.slice/crio-aecb272bc17a9292078e117f623eec814413c2b52c7424204b9e0edde5c010c6 WatchSource:0}: Error finding container aecb272bc17a9292078e117f623eec814413c2b52c7424204b9e0edde5c010c6: Status 404 returned error can't find the container with id aecb272bc17a9292078e117f623eec814413c2b52c7424204b9e0edde5c010c6 Jan 27 16:20:20 crc kubenswrapper[4966]: I0127 16:20:20.993176 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq"] Jan 27 16:20:21 crc kubenswrapper[4966]: I0127 16:20:21.947708 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" event={"ID":"ebb6cac6-208d-407d-b0a5-5556e1968d8d","Type":"ContainerStarted","Data":"85437b92151175d416455ae0ea82f3f1dbc48292fc568bec1a347e5f6887ec1e"} Jan 27 16:20:21 crc kubenswrapper[4966]: I0127 16:20:21.948105 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" event={"ID":"ebb6cac6-208d-407d-b0a5-5556e1968d8d","Type":"ContainerStarted","Data":"aecb272bc17a9292078e117f623eec814413c2b52c7424204b9e0edde5c010c6"} Jan 27 16:20:21 crc kubenswrapper[4966]: I0127 16:20:21.972627 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" podStartSLOduration=2.437805608 podStartE2EDuration="2.972604061s" podCreationTimestamp="2026-01-27 16:20:19 +0000 UTC" firstStartedPulling="2026-01-27 16:20:21.022520315 +0000 UTC m=+2287.325313803" lastFinishedPulling="2026-01-27 16:20:21.557318758 +0000 UTC m=+2287.860112256" 
observedRunningTime="2026-01-27 16:20:21.96298314 +0000 UTC m=+2288.265776648" watchObservedRunningTime="2026-01-27 16:20:21.972604061 +0000 UTC m=+2288.275397559" Jan 27 16:20:28 crc kubenswrapper[4966]: I0127 16:20:28.042226 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-969r8"] Jan 27 16:20:28 crc kubenswrapper[4966]: I0127 16:20:28.055723 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-969r8"] Jan 27 16:20:28 crc kubenswrapper[4966]: I0127 16:20:28.534194 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70504957-f2da-439c-abb3-40ef116366ca" path="/var/lib/kubelet/pods/70504957-f2da-439c-abb3-40ef116366ca/volumes" Jan 27 16:20:29 crc kubenswrapper[4966]: I0127 16:20:29.521160 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:20:29 crc kubenswrapper[4966]: E0127 16:20:29.521511 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:20:30 crc kubenswrapper[4966]: I0127 16:20:30.044466 4966 generic.go:334] "Generic (PLEG): container finished" podID="ebb6cac6-208d-407d-b0a5-5556e1968d8d" containerID="85437b92151175d416455ae0ea82f3f1dbc48292fc568bec1a347e5f6887ec1e" exitCode=0 Jan 27 16:20:30 crc kubenswrapper[4966]: I0127 16:20:30.045677 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" event={"ID":"ebb6cac6-208d-407d-b0a5-5556e1968d8d","Type":"ContainerDied","Data":"85437b92151175d416455ae0ea82f3f1dbc48292fc568bec1a347e5f6887ec1e"} Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.512808 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.576857 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-ssh-key-openstack-edpm-ipam\") pod \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.576958 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-inventory\") pod \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.576982 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dslzd\" (UniqueName: \"kubernetes.io/projected/ebb6cac6-208d-407d-b0a5-5556e1968d8d-kube-api-access-dslzd\") pod \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\" (UID: \"ebb6cac6-208d-407d-b0a5-5556e1968d8d\") " Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.583944 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb6cac6-208d-407d-b0a5-5556e1968d8d-kube-api-access-dslzd" (OuterVolumeSpecName: "kube-api-access-dslzd") pod "ebb6cac6-208d-407d-b0a5-5556e1968d8d" (UID: "ebb6cac6-208d-407d-b0a5-5556e1968d8d"). InnerVolumeSpecName "kube-api-access-dslzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.609991 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-inventory" (OuterVolumeSpecName: "inventory") pod "ebb6cac6-208d-407d-b0a5-5556e1968d8d" (UID: "ebb6cac6-208d-407d-b0a5-5556e1968d8d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.619198 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ebb6cac6-208d-407d-b0a5-5556e1968d8d" (UID: "ebb6cac6-208d-407d-b0a5-5556e1968d8d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.680534 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.680581 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb6cac6-208d-407d-b0a5-5556e1968d8d-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:31 crc kubenswrapper[4966]: I0127 16:20:31.680597 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dslzd\" (UniqueName: \"kubernetes.io/projected/ebb6cac6-208d-407d-b0a5-5556e1968d8d-kube-api-access-dslzd\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.069323 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" event={"ID":"ebb6cac6-208d-407d-b0a5-5556e1968d8d","Type":"ContainerDied","Data":"aecb272bc17a9292078e117f623eec814413c2b52c7424204b9e0edde5c010c6"} Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.069370 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aecb272bc17a9292078e117f623eec814413c2b52c7424204b9e0edde5c010c6" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.069401 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-775hq" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.152509 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h"] Jan 27 16:20:32 crc kubenswrapper[4966]: E0127 16:20:32.153145 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb6cac6-208d-407d-b0a5-5556e1968d8d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.153174 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb6cac6-208d-407d-b0a5-5556e1968d8d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.153436 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb6cac6-208d-407d-b0a5-5556e1968d8d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.154305 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.157807 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.158244 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.158419 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.158575 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.166524 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h"] Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.190676 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.190758 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbb6\" (UniqueName: \"kubernetes.io/projected/f476647f-6bc8-43f4-804e-1349f84b2639-kube-api-access-scbb6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.190785 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.293077 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scbb6\" (UniqueName: \"kubernetes.io/projected/f476647f-6bc8-43f4-804e-1349f84b2639-kube-api-access-scbb6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.293127 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.293356 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.300997 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.305014 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.309178 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scbb6\" (UniqueName: \"kubernetes.io/projected/f476647f-6bc8-43f4-804e-1349f84b2639-kube-api-access-scbb6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:32 crc kubenswrapper[4966]: I0127 16:20:32.471810 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:33 crc kubenswrapper[4966]: I0127 16:20:33.062543 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h"] Jan 27 16:20:33 crc kubenswrapper[4966]: I0127 16:20:33.083263 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" event={"ID":"f476647f-6bc8-43f4-804e-1349f84b2639","Type":"ContainerStarted","Data":"12d62632803bdcbbf8fea2b1077cef6462d2fe3219083e207f8bcb3fd6e90b29"} Jan 27 16:20:34 crc kubenswrapper[4966]: I0127 16:20:34.095857 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" event={"ID":"f476647f-6bc8-43f4-804e-1349f84b2639","Type":"ContainerStarted","Data":"c3d0023ca64c6529b980990e46fd1f7f310c5a6d4bbd777ded9f22068a30cea0"} Jan 27 16:20:34 crc kubenswrapper[4966]: I0127 16:20:34.123004 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" podStartSLOduration=1.6239054849999999 podStartE2EDuration="2.122979997s" podCreationTimestamp="2026-01-27 16:20:32 +0000 UTC" firstStartedPulling="2026-01-27 16:20:33.057765961 +0000 UTC m=+2299.360559449" lastFinishedPulling="2026-01-27 16:20:33.556840463 +0000 UTC m=+2299.859633961" observedRunningTime="2026-01-27 16:20:34.116602998 +0000 UTC m=+2300.419396496" watchObservedRunningTime="2026-01-27 16:20:34.122979997 +0000 UTC m=+2300.425773495" Jan 27 16:20:40 crc kubenswrapper[4966]: I0127 16:20:40.522108 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:20:40 crc kubenswrapper[4966]: E0127 16:20:40.522964 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:20:43 crc kubenswrapper[4966]: I0127 16:20:43.202735 4966 generic.go:334] "Generic (PLEG): container finished" podID="f476647f-6bc8-43f4-804e-1349f84b2639" containerID="c3d0023ca64c6529b980990e46fd1f7f310c5a6d4bbd777ded9f22068a30cea0" exitCode=0 Jan 27 16:20:43 crc kubenswrapper[4966]: I0127 16:20:43.202818 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" event={"ID":"f476647f-6bc8-43f4-804e-1349f84b2639","Type":"ContainerDied","Data":"c3d0023ca64c6529b980990e46fd1f7f310c5a6d4bbd777ded9f22068a30cea0"} Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.704935 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.798585 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-ssh-key-openstack-edpm-ipam\") pod \"f476647f-6bc8-43f4-804e-1349f84b2639\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.799124 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scbb6\" (UniqueName: \"kubernetes.io/projected/f476647f-6bc8-43f4-804e-1349f84b2639-kube-api-access-scbb6\") pod \"f476647f-6bc8-43f4-804e-1349f84b2639\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.799459 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-inventory\") pod \"f476647f-6bc8-43f4-804e-1349f84b2639\" (UID: \"f476647f-6bc8-43f4-804e-1349f84b2639\") " Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.804704 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f476647f-6bc8-43f4-804e-1349f84b2639-kube-api-access-scbb6" (OuterVolumeSpecName: "kube-api-access-scbb6") pod "f476647f-6bc8-43f4-804e-1349f84b2639" (UID: "f476647f-6bc8-43f4-804e-1349f84b2639"). InnerVolumeSpecName "kube-api-access-scbb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.829678 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f476647f-6bc8-43f4-804e-1349f84b2639" (UID: "f476647f-6bc8-43f4-804e-1349f84b2639"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.830165 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-inventory" (OuterVolumeSpecName: "inventory") pod "f476647f-6bc8-43f4-804e-1349f84b2639" (UID: "f476647f-6bc8-43f4-804e-1349f84b2639"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.902595 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.902635 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f476647f-6bc8-43f4-804e-1349f84b2639-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:44 crc kubenswrapper[4966]: I0127 16:20:44.902645 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scbb6\" (UniqueName: \"kubernetes.io/projected/f476647f-6bc8-43f4-804e-1349f84b2639-kube-api-access-scbb6\") on node \"crc\" DevicePath \"\"" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.226832 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" event={"ID":"f476647f-6bc8-43f4-804e-1349f84b2639","Type":"ContainerDied","Data":"12d62632803bdcbbf8fea2b1077cef6462d2fe3219083e207f8bcb3fd6e90b29"} Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.227342 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12d62632803bdcbbf8fea2b1077cef6462d2fe3219083e207f8bcb3fd6e90b29" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.227417 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.324986 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq"] Jan 27 16:20:45 crc kubenswrapper[4966]: E0127 16:20:45.325837 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f476647f-6bc8-43f4-804e-1349f84b2639" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.325965 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f476647f-6bc8-43f4-804e-1349f84b2639" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.326265 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f476647f-6bc8-43f4-804e-1349f84b2639" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.327147 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.330490 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.330868 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.331086 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.331246 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.331433 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.331587 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.332349 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.332673 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.332919 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.342393 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq"] Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.371363 4966 scope.go:117] "RemoveContainer" containerID="901aa4e8c1cd20bf8bc8fa6bbd75e884c3ecf75c805160a742b20499f86cd6fd" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414231 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414294 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414419 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414459 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414481 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414554 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414609 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd4vk\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-kube-api-access-gd4vk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414632 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414672 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414696 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") 
" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414722 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414760 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414776 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414812 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414835 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.414856 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516397 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc 
kubenswrapper[4966]: I0127 16:20:45.516456 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516483 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516518 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516542 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516591 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516654 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd4vk\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-kube-api-access-gd4vk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516678 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516714 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516740 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516765 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516800 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516818 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516875 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516930 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.516964 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: 
\"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.522881 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.523056 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.523204 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.523269 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.523447 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.523620 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.523880 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.524482 4966 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.525519 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.526011 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.526238 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.527044 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.527180 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.527315 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.527964 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.533872 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd4vk\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-kube-api-access-gd4vk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mftzq\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.966282 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:20:45 crc kubenswrapper[4966]: I0127 16:20:45.993197 4966 scope.go:117] "RemoveContainer" containerID="078a9912b217988550b34e53675cc48b1975547984519f14f6aac999492dc6f7" Jan 27 16:20:46 crc kubenswrapper[4966]: I0127 16:20:46.630762 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq"] Jan 27 16:20:47 crc kubenswrapper[4966]: I0127 16:20:47.249966 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" event={"ID":"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44","Type":"ContainerStarted","Data":"15af209e971c8490d6982d487f5a0a5025f7da9d6809a8ff8893ea5399976de4"} Jan 27 16:20:48 crc kubenswrapper[4966]: I0127 16:20:48.262629 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" event={"ID":"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44","Type":"ContainerStarted","Data":"a1fe65b0b27ae6fbdfac8635da73e83476e471eb6cf0e74978a8179b1ce68cd0"} Jan 27 16:20:48 crc kubenswrapper[4966]: I0127 16:20:48.293988 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" podStartSLOduration=2.826219293 podStartE2EDuration="3.293966433s" podCreationTimestamp="2026-01-27 16:20:45 +0000 UTC" firstStartedPulling="2026-01-27 16:20:46.634801529 +0000 UTC m=+2312.937595007" lastFinishedPulling="2026-01-27 16:20:47.102548649 +0000 UTC m=+2313.405342147" observedRunningTime="2026-01-27 16:20:48.281548774 +0000 UTC m=+2314.584342282" watchObservedRunningTime="2026-01-27 16:20:48.293966433 +0000 UTC m=+2314.596759931" Jan 27 16:20:51 crc kubenswrapper[4966]: I0127 16:20:51.521752 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:20:51 crc kubenswrapper[4966]: E0127 16:20:51.522489 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:21:04 crc kubenswrapper[4966]: I0127 16:21:04.521396 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:21:04 crc kubenswrapper[4966]: E0127 16:21:04.522649 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:21:18 crc kubenswrapper[4966]: I0127 16:21:18.521974 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:21:18 crc kubenswrapper[4966]: E0127 16:21:18.523195 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:21:33 crc kubenswrapper[4966]: I0127 16:21:33.521577 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:21:33 crc kubenswrapper[4966]: E0127 16:21:33.522434 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:21:33 crc kubenswrapper[4966]: I0127 16:21:33.739867 4966 generic.go:334] "Generic (PLEG): container finished" podID="7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" containerID="a1fe65b0b27ae6fbdfac8635da73e83476e471eb6cf0e74978a8179b1ce68cd0" exitCode=0 Jan 27 16:21:33 crc kubenswrapper[4966]: I0127 16:21:33.739952 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" event={"ID":"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44","Type":"ContainerDied","Data":"a1fe65b0b27ae6fbdfac8635da73e83476e471eb6cf0e74978a8179b1ce68cd0"} Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.306578 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.352082 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-inventory\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.352176 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-nova-combined-ca-bundle\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353091 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-combined-ca-bundle\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353262 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-repo-setup-combined-ca-bundle\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353359 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-power-monitoring-combined-ca-bundle\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353398 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-ovn-default-certs-0\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353440 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-bootstrap-combined-ca-bundle\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353495 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353525 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd4vk\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-kube-api-access-gd4vk\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 
16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353561 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-libvirt-combined-ca-bundle\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353608 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353705 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ssh-key-openstack-edpm-ipam\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353755 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353806 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353831 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ovn-combined-ca-bundle\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.353882 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-neutron-metadata-combined-ca-bundle\") pod \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\" (UID: \"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44\") " Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.358913 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.359142 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.360239 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.360465 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.361805 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.363564 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-kube-api-access-gd4vk" (OuterVolumeSpecName: "kube-api-access-gd4vk") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "kube-api-access-gd4vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.364197 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.364784 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.366343 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.368156 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.368155 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.368269 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.375216 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.375327 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.401828 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-inventory" (OuterVolumeSpecName: "inventory") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.403400 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" (UID: "7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457431 4966 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457477 4966 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457495 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457509 4966 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457521 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457533 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd4vk\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-kube-api-access-gd4vk\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457544 4966 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457555 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457565 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457576 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457589 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457606 4966 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457620 4966 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457631 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457641 4966 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.457654 4966 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.761601 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" event={"ID":"7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44","Type":"ContainerDied","Data":"15af209e971c8490d6982d487f5a0a5025f7da9d6809a8ff8893ea5399976de4"} Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.761642 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15af209e971c8490d6982d487f5a0a5025f7da9d6809a8ff8893ea5399976de4" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.761746 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mftzq" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.879189 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv"] Jan 27 16:21:35 crc kubenswrapper[4966]: E0127 16:21:35.879815 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.879841 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.880226 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.881044 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.884115 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.884312 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.884476 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.884624 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.884822 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.890915 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv"] Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.971053 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.971100 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.971137 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c29n\" (UniqueName: \"kubernetes.io/projected/e415840f-113a-4545-a340-f206e683b62c-kube-api-access-4c29n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.971269 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e415840f-113a-4545-a340-f206e683b62c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:35 crc kubenswrapper[4966]: I0127 16:21:35.971326 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.072933 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e415840f-113a-4545-a340-f206e683b62c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.073005 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.073099 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.073121 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.073156 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c29n\" (UniqueName: \"kubernetes.io/projected/e415840f-113a-4545-a340-f206e683b62c-kube-api-access-4c29n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.073873 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e415840f-113a-4545-a340-f206e683b62c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.077540 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.078147 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.079845 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.091608 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c29n\" (UniqueName: \"kubernetes.io/projected/e415840f-113a-4545-a340-f206e683b62c-kube-api-access-4c29n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x65gv\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.212607 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:21:36 crc kubenswrapper[4966]: I0127 16:21:36.768614 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv"] Jan 27 16:21:37 crc kubenswrapper[4966]: I0127 16:21:37.779811 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" event={"ID":"e415840f-113a-4545-a340-f206e683b62c","Type":"ContainerStarted","Data":"01d464916d185094049a2e1e6b71152f4320fcda3cb9813daced4fdc3f0b776b"} Jan 27 16:21:38 crc kubenswrapper[4966]: I0127 16:21:38.793247 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" event={"ID":"e415840f-113a-4545-a340-f206e683b62c","Type":"ContainerStarted","Data":"ee52e35cf9be6d0a0c3e82a101b03d3f338d0886b11b5eeaa1c0f1f4cc2fb8d0"} Jan 27 16:21:38 crc kubenswrapper[4966]: I0127 16:21:38.825565 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" podStartSLOduration=2.343365857 podStartE2EDuration="3.82554295s" podCreationTimestamp="2026-01-27 16:21:35 +0000 UTC" firstStartedPulling="2026-01-27 16:21:36.772955457 +0000 UTC m=+2363.075748945" lastFinishedPulling="2026-01-27 16:21:38.25513255 +0000 UTC m=+2364.557926038" observedRunningTime="2026-01-27 16:21:38.816585519 +0000 UTC m=+2365.119379007" watchObservedRunningTime="2026-01-27 16:21:38.82554295 +0000 UTC m=+2365.128336448" Jan 27 16:21:44 crc kubenswrapper[4966]: I0127 16:21:44.531365 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:21:44 crc kubenswrapper[4966]: E0127 16:21:44.532441 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:21:55 crc kubenswrapper[4966]: I0127 16:21:55.522209 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:21:55 crc kubenswrapper[4966]: E0127 16:21:55.524436 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:22:10 crc kubenswrapper[4966]: I0127 16:22:10.521109 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:22:10 crc kubenswrapper[4966]: E0127 16:22:10.522087 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" 
podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:22:24 crc kubenswrapper[4966]: I0127 16:22:24.529522 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:22:24 crc kubenswrapper[4966]: E0127 16:22:24.530400 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:22:36 crc kubenswrapper[4966]: I0127 16:22:36.521694 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:22:36 crc kubenswrapper[4966]: E0127 16:22:36.522633 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:22:41 crc kubenswrapper[4966]: I0127 16:22:41.456575 4966 generic.go:334] "Generic (PLEG): container finished" podID="e415840f-113a-4545-a340-f206e683b62c" containerID="ee52e35cf9be6d0a0c3e82a101b03d3f338d0886b11b5eeaa1c0f1f4cc2fb8d0" exitCode=0 Jan 27 16:22:41 crc kubenswrapper[4966]: I0127 16:22:41.456654 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" event={"ID":"e415840f-113a-4545-a340-f206e683b62c","Type":"ContainerDied","Data":"ee52e35cf9be6d0a0c3e82a101b03d3f338d0886b11b5eeaa1c0f1f4cc2fb8d0"} Jan 27 16:22:42 crc kubenswrapper[4966]: I0127 16:22:42.942961 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.119058 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c29n\" (UniqueName: \"kubernetes.io/projected/e415840f-113a-4545-a340-f206e683b62c-kube-api-access-4c29n\") pod \"e415840f-113a-4545-a340-f206e683b62c\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.119211 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ovn-combined-ca-bundle\") pod \"e415840f-113a-4545-a340-f206e683b62c\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.119263 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ssh-key-openstack-edpm-ipam\") pod \"e415840f-113a-4545-a340-f206e683b62c\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.119331 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-inventory\") pod \"e415840f-113a-4545-a340-f206e683b62c\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.119376 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e415840f-113a-4545-a340-f206e683b62c-ovncontroller-config-0\") pod \"e415840f-113a-4545-a340-f206e683b62c\" (UID: \"e415840f-113a-4545-a340-f206e683b62c\") " Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.145689 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e415840f-113a-4545-a340-f206e683b62c" (UID: "e415840f-113a-4545-a340-f206e683b62c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.153176 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e415840f-113a-4545-a340-f206e683b62c-kube-api-access-4c29n" (OuterVolumeSpecName: "kube-api-access-4c29n") pod "e415840f-113a-4545-a340-f206e683b62c" (UID: "e415840f-113a-4545-a340-f206e683b62c"). InnerVolumeSpecName "kube-api-access-4c29n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.195968 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-inventory" (OuterVolumeSpecName: "inventory") pod "e415840f-113a-4545-a340-f206e683b62c" (UID: "e415840f-113a-4545-a340-f206e683b62c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.222219 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.222245 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c29n\" (UniqueName: \"kubernetes.io/projected/e415840f-113a-4545-a340-f206e683b62c-kube-api-access-4c29n\") on node \"crc\" DevicePath \"\"" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.222257 4966 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.224128 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e415840f-113a-4545-a340-f206e683b62c" (UID: "e415840f-113a-4545-a340-f206e683b62c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.237698 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e415840f-113a-4545-a340-f206e683b62c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e415840f-113a-4545-a340-f206e683b62c" (UID: "e415840f-113a-4545-a340-f206e683b62c"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.324470 4966 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e415840f-113a-4545-a340-f206e683b62c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.324511 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e415840f-113a-4545-a340-f206e683b62c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.478275 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" event={"ID":"e415840f-113a-4545-a340-f206e683b62c","Type":"ContainerDied","Data":"01d464916d185094049a2e1e6b71152f4320fcda3cb9813daced4fdc3f0b776b"} Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.478317 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d464916d185094049a2e1e6b71152f4320fcda3cb9813daced4fdc3f0b776b" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.478336 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x65gv" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.606304 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n"] Jan 27 16:22:43 crc kubenswrapper[4966]: E0127 16:22:43.606936 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e415840f-113a-4545-a340-f206e683b62c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.606959 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="e415840f-113a-4545-a340-f206e683b62c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.607214 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="e415840f-113a-4545-a340-f206e683b62c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.608102 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.610790 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.610860 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.611057 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.612268 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.612361 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.614842 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.618807 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n"] Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.734339 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.734475 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94p4\" (UniqueName: \"kubernetes.io/projected/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-kube-api-access-b94p4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.734615 4966 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.734732 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.734948 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.735050 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.837364 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.837496 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.837622 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.837711 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.837790 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.837872 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94p4\" (UniqueName: \"kubernetes.io/projected/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-kube-api-access-b94p4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.842294 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.843173 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.843474 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.846349 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.855256 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.856956 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94p4\" (UniqueName: \"kubernetes.io/projected/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-kube-api-access-b94p4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:43 crc kubenswrapper[4966]: I0127 16:22:43.925985 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:22:44 crc kubenswrapper[4966]: I0127 16:22:44.509890 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n"] Jan 27 16:22:45 crc kubenswrapper[4966]: I0127 16:22:45.498708 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" event={"ID":"f9e3c1ee-653a-41b5-8f96-ebdb1167552c","Type":"ContainerStarted","Data":"6f6b89209822aa1140ba67f833111c748250fdb0f95ebf3d0c9d0a13e972bfe6"} Jan 27 16:22:46 crc kubenswrapper[4966]: I0127 16:22:46.512486 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" event={"ID":"f9e3c1ee-653a-41b5-8f96-ebdb1167552c","Type":"ContainerStarted","Data":"60e0313f399f65f8955cef23203af4e05ba1b182c428bf0a1036a502b9308a4b"} Jan 27 16:22:46 crc kubenswrapper[4966]: I0127 16:22:46.547228 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" podStartSLOduration=2.828183647 podStartE2EDuration="3.547208317s" podCreationTimestamp="2026-01-27 16:22:43 +0000 UTC" firstStartedPulling="2026-01-27 16:22:44.516215411 +0000 UTC m=+2430.819008909" lastFinishedPulling="2026-01-27 16:22:45.235240091 +0000 UTC m=+2431.538033579" observedRunningTime="2026-01-27 16:22:46.539799304 +0000 UTC m=+2432.842592812" watchObservedRunningTime="2026-01-27 16:22:46.547208317 +0000 UTC m=+2432.850001805" Jan 27 16:22:48 crc kubenswrapper[4966]: I0127 16:22:48.521860 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:22:48 crc kubenswrapper[4966]: E0127 16:22:48.522466 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:22:59 crc kubenswrapper[4966]: I0127 16:22:59.521328 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:22:59 crc kubenswrapper[4966]: E0127 16:22:59.523034 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:23:11 crc kubenswrapper[4966]: I0127 16:23:11.521400 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:23:11 crc kubenswrapper[4966]: E0127 16:23:11.522462 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:23:26 crc kubenswrapper[4966]: I0127 16:23:26.521327 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:23:26 crc kubenswrapper[4966]: E0127 16:23:26.522483 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:23:34 crc kubenswrapper[4966]: I0127 16:23:34.066492 4966 generic.go:334] "Generic (PLEG): container finished" podID="f9e3c1ee-653a-41b5-8f96-ebdb1167552c" containerID="60e0313f399f65f8955cef23203af4e05ba1b182c428bf0a1036a502b9308a4b" exitCode=0 Jan 27 16:23:34 crc kubenswrapper[4966]: I0127 16:23:34.066555 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" event={"ID":"f9e3c1ee-653a-41b5-8f96-ebdb1167552c","Type":"ContainerDied","Data":"60e0313f399f65f8955cef23203af4e05ba1b182c428bf0a1036a502b9308a4b"} Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.612849 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.732033 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-metadata-combined-ca-bundle\") pod \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.732135 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-ssh-key-openstack-edpm-ipam\") pod \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.732405 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-nova-metadata-neutron-config-0\") pod \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.732477 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b94p4\" (UniqueName: \"kubernetes.io/projected/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-kube-api-access-b94p4\") pod \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.732513 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.732653 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-inventory\") pod \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\" (UID: \"f9e3c1ee-653a-41b5-8f96-ebdb1167552c\") " Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.737679 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f9e3c1ee-653a-41b5-8f96-ebdb1167552c" (UID: "f9e3c1ee-653a-41b5-8f96-ebdb1167552c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.738550 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-kube-api-access-b94p4" (OuterVolumeSpecName: "kube-api-access-b94p4") pod "f9e3c1ee-653a-41b5-8f96-ebdb1167552c" (UID: "f9e3c1ee-653a-41b5-8f96-ebdb1167552c"). InnerVolumeSpecName "kube-api-access-b94p4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.770647 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f9e3c1ee-653a-41b5-8f96-ebdb1167552c" (UID: "f9e3c1ee-653a-41b5-8f96-ebdb1167552c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.780796 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f9e3c1ee-653a-41b5-8f96-ebdb1167552c" (UID: "f9e3c1ee-653a-41b5-8f96-ebdb1167552c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.784348 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f9e3c1ee-653a-41b5-8f96-ebdb1167552c" (UID: "f9e3c1ee-653a-41b5-8f96-ebdb1167552c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.785465 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-inventory" (OuterVolumeSpecName: "inventory") pod "f9e3c1ee-653a-41b5-8f96-ebdb1167552c" (UID: "f9e3c1ee-653a-41b5-8f96-ebdb1167552c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.836673 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.836738 4966 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.836764 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.836787 4966 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.836808 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b94p4\" (UniqueName: \"kubernetes.io/projected/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-kube-api-access-b94p4\") on node \"crc\" DevicePath \"\"" Jan 27 16:23:35 crc kubenswrapper[4966]: I0127 16:23:35.836831 4966 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9e3c1ee-653a-41b5-8f96-ebdb1167552c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.123434 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" event={"ID":"f9e3c1ee-653a-41b5-8f96-ebdb1167552c","Type":"ContainerDied","Data":"6f6b89209822aa1140ba67f833111c748250fdb0f95ebf3d0c9d0a13e972bfe6"} Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.123502 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f6b89209822aa1140ba67f833111c748250fdb0f95ebf3d0c9d0a13e972bfe6" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.123535 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.203238 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r"] Jan 27 16:23:36 crc kubenswrapper[4966]: E0127 16:23:36.204080 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e3c1ee-653a-41b5-8f96-ebdb1167552c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.204118 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e3c1ee-653a-41b5-8f96-ebdb1167552c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.204471 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e3c1ee-653a-41b5-8f96-ebdb1167552c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.205481 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.207738 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.208477 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.209403 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.209438 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.211970 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.236806 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r"] Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.348430 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9td6\" (UniqueName: \"kubernetes.io/projected/8d3c294b-3e69-4b63-8d6a-471ed67944bc-kube-api-access-b9td6\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.348963 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.349051 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" 
(UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.349198 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.349239 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.451563 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.451615 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.451697 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.451725 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.451793 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9td6\" (UniqueName: \"kubernetes.io/projected/8d3c294b-3e69-4b63-8d6a-471ed67944bc-kube-api-access-b9td6\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.456156 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: 
\"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.456479 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.456482 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.458128 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.472797 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9td6\" (UniqueName: \"kubernetes.io/projected/8d3c294b-3e69-4b63-8d6a-471ed67944bc-kube-api-access-b9td6\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:36 crc kubenswrapper[4966]: I0127 16:23:36.531679 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:23:37 crc kubenswrapper[4966]: I0127 16:23:37.058880 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r"] Jan 27 16:23:37 crc kubenswrapper[4966]: I0127 16:23:37.134277 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" event={"ID":"8d3c294b-3e69-4b63-8d6a-471ed67944bc","Type":"ContainerStarted","Data":"631e821de5c15ea507a6b1bbf8a9abe01a251d41819320e332d7ec996ff87db8"} Jan 27 16:23:37 crc kubenswrapper[4966]: I0127 16:23:37.520778 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:23:37 crc kubenswrapper[4966]: E0127 16:23:37.521319 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:23:39 crc kubenswrapper[4966]: I0127 16:23:39.156731 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" event={"ID":"8d3c294b-3e69-4b63-8d6a-471ed67944bc","Type":"ContainerStarted","Data":"c72f32d6697755e1f28fe282dc8cc7dc3dd5937d8a4c0fa8e09ef14110892967"} Jan 27 16:23:39 crc kubenswrapper[4966]: I0127 16:23:39.183035 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" podStartSLOduration=1.7741480410000001 podStartE2EDuration="3.183007914s" podCreationTimestamp="2026-01-27 16:23:36 +0000 UTC" firstStartedPulling="2026-01-27 16:23:37.065853277 +0000 UTC m=+2483.368646755" lastFinishedPulling="2026-01-27 16:23:38.47471314 +0000 UTC m=+2484.777506628" observedRunningTime="2026-01-27 16:23:39.171982838 +0000 UTC m=+2485.474776336" watchObservedRunningTime="2026-01-27 16:23:39.183007914 +0000 UTC m=+2485.485801402" Jan 27 16:23:49 crc kubenswrapper[4966]: I0127 16:23:49.521479 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:23:49 crc kubenswrapper[4966]: E0127 16:23:49.522339 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:24:02 crc kubenswrapper[4966]: I0127 16:24:02.520663 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:24:02 crc kubenswrapper[4966]: E0127 16:24:02.521430 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:24:15 crc kubenswrapper[4966]: I0127 16:24:15.521523 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:24:16 crc kubenswrapper[4966]: I0127 16:24:16.579092 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"bd619a964f88149c2264863e47143f093900ae2ffb4faa0bcd78f7964962f4fe"} Jan 27 16:25:46 crc kubenswrapper[4966]: I0127 16:25:46.982776 4966 scope.go:117] "RemoveContainer" containerID="301475f95f734364edaf8b72edb8181b638cb003b3a7527c6a57e67d89a796d2" Jan 27 16:25:47 crc kubenswrapper[4966]: I0127 16:25:47.049009 4966 scope.go:117] "RemoveContainer" containerID="72dd2c28e090489052c38b094702797dd24252bd067ba3c4a0f5cb2468f575f8" Jan 27 16:25:47 crc kubenswrapper[4966]: I0127 16:25:47.214165 4966 scope.go:117] "RemoveContainer" containerID="d70ead958cf7aea19e7645185630d9109c89faed996525ec698bf70466198cc4" Jan 27 16:26:40 crc kubenswrapper[4966]: I0127 16:26:40.119817 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:26:40 crc kubenswrapper[4966]: I0127 16:26:40.120632 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:26:41 crc kubenswrapper[4966]: I0127 16:26:41.788787 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqhv"] Jan 27 16:26:41 crc kubenswrapper[4966]: I0127 16:26:41.794754 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:41 crc kubenswrapper[4966]: I0127 16:26:41.804068 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqhv"] Jan 27 16:26:41 crc kubenswrapper[4966]: I0127 16:26:41.973393 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-utilities\") pod \"redhat-marketplace-4kqhv\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:41 crc kubenswrapper[4966]: I0127 16:26:41.973521 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8q85\" (UniqueName: \"kubernetes.io/projected/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-kube-api-access-s8q85\") pod \"redhat-marketplace-4kqhv\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:41 crc kubenswrapper[4966]: I0127 16:26:41.973595 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-catalog-content\") pod \"redhat-marketplace-4kqhv\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:42 crc kubenswrapper[4966]: I0127 16:26:42.076327 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-utilities\") pod \"redhat-marketplace-4kqhv\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:42 crc kubenswrapper[4966]: I0127 16:26:42.076396 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8q85\" (UniqueName: \"kubernetes.io/projected/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-kube-api-access-s8q85\") pod \"redhat-marketplace-4kqhv\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:42 crc kubenswrapper[4966]: I0127 16:26:42.076452 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-catalog-content\") pod \"redhat-marketplace-4kqhv\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:42 crc kubenswrapper[4966]: I0127 16:26:42.077306 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-utilities\") pod \"redhat-marketplace-4kqhv\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:42 crc kubenswrapper[4966]: I0127 16:26:42.077335 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-catalog-content\") pod \"redhat-marketplace-4kqhv\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:42 crc kubenswrapper[4966]: I0127 16:26:42.100499 4966 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s8q85\" (UniqueName: \"kubernetes.io/projected/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-kube-api-access-s8q85\") pod \"redhat-marketplace-4kqhv\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:42 crc kubenswrapper[4966]: I0127 16:26:42.129183 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:42 crc kubenswrapper[4966]: I0127 16:26:42.708867 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqhv"] Jan 27 16:26:43 crc kubenswrapper[4966]: I0127 16:26:43.556778 4966 generic.go:334] "Generic (PLEG): container finished" podID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerID="567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07" exitCode=0 Jan 27 16:26:43 crc kubenswrapper[4966]: I0127 16:26:43.557037 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqhv" event={"ID":"decb60d7-51c5-4ae4-b27d-40d0da62b0c9","Type":"ContainerDied","Data":"567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07"} Jan 27 16:26:43 crc kubenswrapper[4966]: I0127 16:26:43.558426 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqhv" event={"ID":"decb60d7-51c5-4ae4-b27d-40d0da62b0c9","Type":"ContainerStarted","Data":"ecb8491ce5fab8ad5f1ddde661ee78dfd923e3f41af7ab15baccc4d21e2d4dd7"} Jan 27 16:26:43 crc kubenswrapper[4966]: I0127 16:26:43.559598 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:26:45 crc kubenswrapper[4966]: I0127 16:26:45.584326 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqhv" event={"ID":"decb60d7-51c5-4ae4-b27d-40d0da62b0c9","Type":"ContainerStarted","Data":"a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe"} Jan 27 16:26:46 crc kubenswrapper[4966]: I0127 16:26:46.609585 4966 generic.go:334] "Generic (PLEG): container finished" podID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerID="a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe" exitCode=0 Jan 27 16:26:46 crc kubenswrapper[4966]: I0127 16:26:46.609633 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqhv" event={"ID":"decb60d7-51c5-4ae4-b27d-40d0da62b0c9","Type":"ContainerDied","Data":"a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe"} Jan 27 16:26:47 crc kubenswrapper[4966]: I0127 16:26:47.620912 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqhv" event={"ID":"decb60d7-51c5-4ae4-b27d-40d0da62b0c9","Type":"ContainerStarted","Data":"f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9"} Jan 27 16:26:47 crc kubenswrapper[4966]: I0127 16:26:47.647038 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4kqhv" podStartSLOduration=3.144260356 podStartE2EDuration="6.647014758s" podCreationTimestamp="2026-01-27 16:26:41 +0000 UTC" firstStartedPulling="2026-01-27 16:26:43.559369703 +0000 UTC m=+2669.862163191" lastFinishedPulling="2026-01-27 16:26:47.062124105 +0000 UTC m=+2673.364917593" observedRunningTime="2026-01-27 16:26:47.637291653 +0000 UTC m=+2673.940085161" watchObservedRunningTime="2026-01-27 16:26:47.647014758 +0000 UTC 
m=+2673.949808246" Jan 27 16:26:52 crc kubenswrapper[4966]: I0127 16:26:52.130562 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:52 crc kubenswrapper[4966]: I0127 16:26:52.131219 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:52 crc kubenswrapper[4966]: I0127 16:26:52.183085 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:52 crc kubenswrapper[4966]: I0127 16:26:52.768879 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:52 crc kubenswrapper[4966]: I0127 16:26:52.839468 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqhv"] Jan 27 16:26:54 crc kubenswrapper[4966]: I0127 16:26:54.687095 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4kqhv" podUID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerName="registry-server" containerID="cri-o://f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9" gracePeriod=2 Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.201014 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.333139 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-utilities\") pod \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.333214 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-catalog-content\") pod \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.333345 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8q85\" (UniqueName: \"kubernetes.io/projected/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-kube-api-access-s8q85\") pod \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\" (UID: \"decb60d7-51c5-4ae4-b27d-40d0da62b0c9\") " Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.333938 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-utilities" (OuterVolumeSpecName: "utilities") pod "decb60d7-51c5-4ae4-b27d-40d0da62b0c9" (UID: "decb60d7-51c5-4ae4-b27d-40d0da62b0c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.334257 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.353250 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-kube-api-access-s8q85" (OuterVolumeSpecName: "kube-api-access-s8q85") pod "decb60d7-51c5-4ae4-b27d-40d0da62b0c9" (UID: "decb60d7-51c5-4ae4-b27d-40d0da62b0c9"). InnerVolumeSpecName "kube-api-access-s8q85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.358779 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "decb60d7-51c5-4ae4-b27d-40d0da62b0c9" (UID: "decb60d7-51c5-4ae4-b27d-40d0da62b0c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.436774 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.436830 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8q85\" (UniqueName: \"kubernetes.io/projected/decb60d7-51c5-4ae4-b27d-40d0da62b0c9-kube-api-access-s8q85\") on node \"crc\" DevicePath \"\"" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.701863 4966 generic.go:334] "Generic (PLEG): container finished" podID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerID="f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9" exitCode=0 Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.701922 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqhv" event={"ID":"decb60d7-51c5-4ae4-b27d-40d0da62b0c9","Type":"ContainerDied","Data":"f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9"} Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.701949 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kqhv" event={"ID":"decb60d7-51c5-4ae4-b27d-40d0da62b0c9","Type":"ContainerDied","Data":"ecb8491ce5fab8ad5f1ddde661ee78dfd923e3f41af7ab15baccc4d21e2d4dd7"} Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.701965 4966 scope.go:117] "RemoveContainer" containerID="f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.702122 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kqhv" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.736468 4966 scope.go:117] "RemoveContainer" containerID="a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.738859 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqhv"] Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.759997 4966 scope.go:117] "RemoveContainer" containerID="567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.768547 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kqhv"] Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.817209 4966 scope.go:117] "RemoveContainer" containerID="f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9" Jan 27 16:26:55 crc kubenswrapper[4966]: E0127 16:26:55.817662 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9\": container with ID starting with f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9 not found: ID does not exist" containerID="f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.817720 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9"} err="failed to get container status \"f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9\": rpc error: code = NotFound desc = could not find container \"f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9\": container with ID starting with f3e80e2f4734046d0af5b58fa05da22d1fc0fb0359d8b11d1ddb327cd34da2d9 not found: ID does not exist" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.817759 4966 scope.go:117] "RemoveContainer" containerID="a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe" Jan 27 16:26:55 crc kubenswrapper[4966]: E0127 16:26:55.818195 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe\": container with ID starting with a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe not found: ID does not exist" containerID="a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.818264 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe"} err="failed to get container status \"a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe\": rpc error: code = NotFound desc = could not find container \"a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe\": container with ID starting with a1c9e0be1f51c7bc8cf94c01a56af69e6d7cfd7be53a548c9f7c1f0a9aef9cfe not found: ID does not exist" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.818296 4966 scope.go:117] "RemoveContainer" containerID="567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07" Jan 27 16:26:55 crc kubenswrapper[4966]: E0127 16:26:55.818552 4966 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07\": container with ID starting with 567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07 not found: ID does not exist" containerID="567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07" Jan 27 16:26:55 crc kubenswrapper[4966]: I0127 16:26:55.818587 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07"} err="failed to get container status \"567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07\": rpc error: code = NotFound desc = could not find container \"567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07\": container with ID starting with 567a198028a0c6477001fa2e37203b118ac586ad36ae59a24787f07767efdb07 not found: ID does not exist" Jan 27 16:26:56 crc kubenswrapper[4966]: I0127 16:26:56.540958 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" path="/var/lib/kubelet/pods/decb60d7-51c5-4ae4-b27d-40d0da62b0c9/volumes" Jan 27 16:27:10 crc kubenswrapper[4966]: I0127 16:27:10.119886 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:27:10 crc kubenswrapper[4966]: I0127 16:27:10.120480 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:27:40 crc kubenswrapper[4966]: I0127 16:27:40.125399 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:27:40 crc kubenswrapper[4966]: I0127 16:27:40.126116 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:27:40 crc kubenswrapper[4966]: I0127 16:27:40.126172 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 16:27:40 crc kubenswrapper[4966]: I0127 16:27:40.127530 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd619a964f88149c2264863e47143f093900ae2ffb4faa0bcd78f7964962f4fe"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:27:40 crc kubenswrapper[4966]: I0127 16:27:40.127650 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" 
podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://bd619a964f88149c2264863e47143f093900ae2ffb4faa0bcd78f7964962f4fe" gracePeriod=600 Jan 27 16:27:41 crc kubenswrapper[4966]: I0127 16:27:41.185409 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="bd619a964f88149c2264863e47143f093900ae2ffb4faa0bcd78f7964962f4fe" exitCode=0 Jan 27 16:27:41 crc kubenswrapper[4966]: I0127 16:27:41.185501 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"bd619a964f88149c2264863e47143f093900ae2ffb4faa0bcd78f7964962f4fe"} Jan 27 16:27:41 crc kubenswrapper[4966]: I0127 16:27:41.186106 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d"} Jan 27 16:27:41 crc kubenswrapper[4966]: I0127 16:27:41.186130 4966 scope.go:117] "RemoveContainer" containerID="db81131cdf17cc163a6ad2a54384afd4f845bf59f936e9ca33411b7116e0ad54" Jan 27 16:27:47 crc kubenswrapper[4966]: I0127 16:27:47.246959 4966 generic.go:334] "Generic (PLEG): container finished" podID="8d3c294b-3e69-4b63-8d6a-471ed67944bc" containerID="c72f32d6697755e1f28fe282dc8cc7dc3dd5937d8a4c0fa8e09ef14110892967" exitCode=0 Jan 27 16:27:47 crc kubenswrapper[4966]: I0127 16:27:47.247044 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" event={"ID":"8d3c294b-3e69-4b63-8d6a-471ed67944bc","Type":"ContainerDied","Data":"c72f32d6697755e1f28fe282dc8cc7dc3dd5937d8a4c0fa8e09ef14110892967"} Jan 27 16:27:48 crc kubenswrapper[4966]: I0127 16:27:48.751862 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:27:48 crc kubenswrapper[4966]: I0127 16:27:48.915553 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-secret-0\") pod \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " Jan 27 16:27:48 crc kubenswrapper[4966]: I0127 16:27:48.915918 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-combined-ca-bundle\") pod \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " Jan 27 16:27:48 crc kubenswrapper[4966]: I0127 16:27:48.916021 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9td6\" (UniqueName: \"kubernetes.io/projected/8d3c294b-3e69-4b63-8d6a-471ed67944bc-kube-api-access-b9td6\") pod \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " Jan 27 16:27:48 crc kubenswrapper[4966]: I0127 16:27:48.916113 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-inventory\") pod \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " Jan 27 16:27:48 crc kubenswrapper[4966]: I0127 16:27:48.916327 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-ssh-key-openstack-edpm-ipam\") pod \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\" (UID: \"8d3c294b-3e69-4b63-8d6a-471ed67944bc\") " Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.396460 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8d3c294b-3e69-4b63-8d6a-471ed67944bc" (UID: "8d3c294b-3e69-4b63-8d6a-471ed67944bc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.396744 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d3c294b-3e69-4b63-8d6a-471ed67944bc-kube-api-access-b9td6" (OuterVolumeSpecName: "kube-api-access-b9td6") pod "8d3c294b-3e69-4b63-8d6a-471ed67944bc" (UID: "8d3c294b-3e69-4b63-8d6a-471ed67944bc"). InnerVolumeSpecName "kube-api-access-b9td6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.405825 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9td6\" (UniqueName: \"kubernetes.io/projected/8d3c294b-3e69-4b63-8d6a-471ed67944bc-kube-api-access-b9td6\") on node \"crc\" DevicePath \"\"" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.412084 4966 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.419009 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5"] Jan 27 16:27:49 crc kubenswrapper[4966]: E0127 16:27:49.419607 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerName="extract-utilities" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.419627 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerName="extract-utilities" Jan 27 16:27:49 crc kubenswrapper[4966]: E0127 16:27:49.419638 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerName="extract-content" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.419645 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerName="extract-content" Jan 27 16:27:49 crc kubenswrapper[4966]: E0127 16:27:49.419662 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerName="registry-server" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.419669 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerName="registry-server" Jan 27 16:27:49 crc kubenswrapper[4966]: E0127 16:27:49.419689 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3c294b-3e69-4b63-8d6a-471ed67944bc" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.419698 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3c294b-3e69-4b63-8d6a-471ed67944bc" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.420048 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d3c294b-3e69-4b63-8d6a-471ed67944bc" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.420086 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="decb60d7-51c5-4ae4-b27d-40d0da62b0c9" containerName="registry-server" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.420946 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.429700 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.431836 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.432027 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.439432 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8d3c294b-3e69-4b63-8d6a-471ed67944bc" (UID: "8d3c294b-3e69-4b63-8d6a-471ed67944bc"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.441789 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-inventory" (OuterVolumeSpecName: "inventory") pod "8d3c294b-3e69-4b63-8d6a-471ed67944bc" (UID: "8d3c294b-3e69-4b63-8d6a-471ed67944bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.459197 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8d3c294b-3e69-4b63-8d6a-471ed67944bc" (UID: "8d3c294b-3e69-4b63-8d6a-471ed67944bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.477600 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" event={"ID":"8d3c294b-3e69-4b63-8d6a-471ed67944bc","Type":"ContainerDied","Data":"631e821de5c15ea507a6b1bbf8a9abe01a251d41819320e332d7ec996ff87db8"} Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.477645 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="631e821de5c15ea507a6b1bbf8a9abe01a251d41819320e332d7ec996ff87db8" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.477698 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.514128 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.514183 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.514247 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.514279 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.514309 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.514340 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.514375 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.514497 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbkhr\" (UniqueName: 
\"kubernetes.io/projected/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-kube-api-access-lbkhr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.514558 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.515093 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.515213 4966 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.515295 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d3c294b-3e69-4b63-8d6a-471ed67944bc-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.573295 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5"] Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.621979 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.622176 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.622495 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.622638 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 
16:27:49.622767 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.623419 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.623577 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.624091 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbkhr\" (UniqueName: \"kubernetes.io/projected/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-kube-api-access-lbkhr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.624188 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.632851 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.633828 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.639428 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.639863 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.646843 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.650367 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.654485 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.660265 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbkhr\" (UniqueName: \"kubernetes.io/projected/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-kube-api-access-lbkhr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.663498 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cvnb5\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:49 crc kubenswrapper[4966]: I0127 16:27:49.814479 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" Jan 27 16:27:50 crc kubenswrapper[4966]: I0127 16:27:50.396155 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5"] Jan 27 16:27:50 crc kubenswrapper[4966]: I0127 16:27:50.488119 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" event={"ID":"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe","Type":"ContainerStarted","Data":"dc0e2d832691a677244da11db1a181dc29f29e720dc91d64203f1ddae87aa577"} Jan 27 16:27:51 crc kubenswrapper[4966]: I0127 16:27:51.501440 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" event={"ID":"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe","Type":"ContainerStarted","Data":"7b694d888ebd3ae8e50ece30fdfabbf163500e0f07a41b0d52891ac5f2359aed"} Jan 27 16:27:51 crc kubenswrapper[4966]: I0127 16:27:51.522390 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" podStartSLOduration=2.116722475 podStartE2EDuration="2.522373087s" podCreationTimestamp="2026-01-27 16:27:49 +0000 UTC" firstStartedPulling="2026-01-27 16:27:50.405095358 +0000 UTC m=+2736.707888846" lastFinishedPulling="2026-01-27 16:27:50.81074597 +0000 UTC m=+2737.113539458" observedRunningTime="2026-01-27 16:27:51.517811765 +0000 UTC m=+2737.820605273" watchObservedRunningTime="2026-01-27 16:27:51.522373087 +0000 UTC m=+2737.825166575" Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.739761 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q7pgb"] Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.743370 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.759543 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7pgb"] Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.881504 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-catalog-content\") pod \"redhat-operators-q7pgb\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") " pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.881852 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-utilities\") pod \"redhat-operators-q7pgb\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") " pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.882158 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8pq2\" (UniqueName: \"kubernetes.io/projected/d2fc58d7-1031-43db-93ea-bb524ce24398-kube-api-access-v8pq2\") pod \"redhat-operators-q7pgb\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") " pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.992465 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8pq2\" (UniqueName: \"kubernetes.io/projected/d2fc58d7-1031-43db-93ea-bb524ce24398-kube-api-access-v8pq2\") pod \"redhat-operators-q7pgb\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") " pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.992891 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-catalog-content\") pod \"redhat-operators-q7pgb\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") " pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.993011 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-utilities\") pod \"redhat-operators-q7pgb\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") " pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.993774 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-catalog-content\") pod \"redhat-operators-q7pgb\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") " pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:26 crc kubenswrapper[4966]: I0127 16:28:26.994031 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-utilities\") pod \"redhat-operators-q7pgb\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") " pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:27 crc kubenswrapper[4966]: I0127 16:28:27.045250 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v8pq2\" (UniqueName: \"kubernetes.io/projected/d2fc58d7-1031-43db-93ea-bb524ce24398-kube-api-access-v8pq2\") pod \"redhat-operators-q7pgb\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") " pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:27 crc kubenswrapper[4966]: I0127 16:28:27.079305 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7pgb" Jan 27 16:28:27 crc kubenswrapper[4966]: I0127 16:28:27.627976 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7pgb"] Jan 27 16:28:27 crc kubenswrapper[4966]: I0127 16:28:27.861514 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7pgb" event={"ID":"d2fc58d7-1031-43db-93ea-bb524ce24398","Type":"ContainerStarted","Data":"64f2f544a83e13726c2a6474f8495168b05d5aab531a7b497045e2cf492ea07f"} Jan 27 16:28:28 crc kubenswrapper[4966]: I0127 16:28:28.873102 4966 generic.go:334] "Generic (PLEG): container finished" podID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerID="a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae" exitCode=0 Jan 27 16:28:28 crc kubenswrapper[4966]: I0127 16:28:28.873145 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7pgb" event={"ID":"d2fc58d7-1031-43db-93ea-bb524ce24398","Type":"ContainerDied","Data":"a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae"} Jan 27 16:28:29 crc kubenswrapper[4966]: I0127 16:28:29.884838 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7pgb" event={"ID":"d2fc58d7-1031-43db-93ea-bb524ce24398","Type":"ContainerStarted","Data":"85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1"} Jan 27 16:28:34 crc kubenswrapper[4966]: I0127 16:28:34.988008 4966 generic.go:334] "Generic (PLEG): container finished" podID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerID="85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1" exitCode=0 Jan 27 16:28:34 crc kubenswrapper[4966]: I0127 16:28:34.988082 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7pgb" event={"ID":"d2fc58d7-1031-43db-93ea-bb524ce24398","Type":"ContainerDied","Data":"85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1"} Jan 27 16:28:36 crc kubenswrapper[4966]: I0127 16:28:36.003913 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7pgb" event={"ID":"d2fc58d7-1031-43db-93ea-bb524ce24398","Type":"ContainerStarted","Data":"81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598"} Jan 27 16:28:36 crc kubenswrapper[4966]: I0127 16:28:36.024423 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q7pgb" podStartSLOduration=3.514902654 podStartE2EDuration="10.024408071s" podCreationTimestamp="2026-01-27 16:28:26 +0000 UTC" firstStartedPulling="2026-01-27 16:28:28.874831804 +0000 UTC m=+2775.177625292" lastFinishedPulling="2026-01-27 16:28:35.384337221 +0000 UTC m=+2781.687130709" observedRunningTime="2026-01-27 16:28:36.021364516 +0000 UTC m=+2782.324158004" watchObservedRunningTime="2026-01-27 16:28:36.024408071 +0000 UTC m=+2782.327201559" Jan 27 16:28:37 crc kubenswrapper[4966]: I0127 16:28:37.082080 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q7pgb" 
Jan 27 16:28:37 crc kubenswrapper[4966]: I0127 16:28:37.082127 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q7pgb"
Jan 27 16:28:38 crc kubenswrapper[4966]: I0127 16:28:38.137089 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q7pgb" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerName="registry-server" probeResult="failure" output=<
Jan 27 16:28:38 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s
Jan 27 16:28:38 crc kubenswrapper[4966]: >
Jan 27 16:28:47 crc kubenswrapper[4966]: I0127 16:28:47.129532 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q7pgb"
Jan 27 16:28:47 crc kubenswrapper[4966]: I0127 16:28:47.190783 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q7pgb"
Jan 27 16:28:47 crc kubenswrapper[4966]: I0127 16:28:47.366137 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7pgb"]
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.178613 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q7pgb" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerName="registry-server" containerID="cri-o://81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598" gracePeriod=2
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.676234 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7pgb"
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.854229 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-catalog-content\") pod \"d2fc58d7-1031-43db-93ea-bb524ce24398\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") "
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.854295 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-utilities\") pod \"d2fc58d7-1031-43db-93ea-bb524ce24398\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") "
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.854500 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8pq2\" (UniqueName: \"kubernetes.io/projected/d2fc58d7-1031-43db-93ea-bb524ce24398-kube-api-access-v8pq2\") pod \"d2fc58d7-1031-43db-93ea-bb524ce24398\" (UID: \"d2fc58d7-1031-43db-93ea-bb524ce24398\") "
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.855151 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-utilities" (OuterVolumeSpecName: "utilities") pod "d2fc58d7-1031-43db-93ea-bb524ce24398" (UID: "d2fc58d7-1031-43db-93ea-bb524ce24398"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.863302 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2fc58d7-1031-43db-93ea-bb524ce24398-kube-api-access-v8pq2" (OuterVolumeSpecName: "kube-api-access-v8pq2") pod "d2fc58d7-1031-43db-93ea-bb524ce24398" (UID: "d2fc58d7-1031-43db-93ea-bb524ce24398"). InnerVolumeSpecName "kube-api-access-v8pq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.957943 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.957983 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8pq2\" (UniqueName: \"kubernetes.io/projected/d2fc58d7-1031-43db-93ea-bb524ce24398-kube-api-access-v8pq2\") on node \"crc\" DevicePath \"\""
Jan 27 16:28:48 crc kubenswrapper[4966]: I0127 16:28:48.987106 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2fc58d7-1031-43db-93ea-bb524ce24398" (UID: "d2fc58d7-1031-43db-93ea-bb524ce24398"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.061269 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fc58d7-1031-43db-93ea-bb524ce24398-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.190265 4966 generic.go:334] "Generic (PLEG): container finished" podID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerID="81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598" exitCode=0
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.190347 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7pgb"
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.190351 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7pgb" event={"ID":"d2fc58d7-1031-43db-93ea-bb524ce24398","Type":"ContainerDied","Data":"81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598"}
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.191696 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7pgb" event={"ID":"d2fc58d7-1031-43db-93ea-bb524ce24398","Type":"ContainerDied","Data":"64f2f544a83e13726c2a6474f8495168b05d5aab531a7b497045e2cf492ea07f"}
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.191727 4966 scope.go:117] "RemoveContainer" containerID="81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598"
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.232046 4966 scope.go:117] "RemoveContainer" containerID="85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1"
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.245886 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7pgb"]
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.262684 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q7pgb"]
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.266766 4966 scope.go:117] "RemoveContainer" containerID="a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae"
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.339344 4966 scope.go:117] "RemoveContainer" containerID="81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598"
Jan 27 16:28:49 crc kubenswrapper[4966]: E0127 16:28:49.339738 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598\": container with ID starting with 81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598 not found: ID does not exist" containerID="81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598"
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.339775 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598"} err="failed to get container status \"81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598\": rpc error: code = NotFound desc = could not find container \"81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598\": container with ID starting with 81faf1403b182343a33a58e5cdc358692a5708de6e94d8e440364584d5391598 not found: ID does not exist"
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.339800 4966 scope.go:117] "RemoveContainer" containerID="85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1"
Jan 27 16:28:49 crc kubenswrapper[4966]: E0127 16:28:49.340117 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1\": container with ID starting with 85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1 not found: ID does not exist" containerID="85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1"
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.340215 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1"} err="failed to get container status \"85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1\": rpc error: code = NotFound desc = could not find container \"85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1\": container with ID starting with 85b1f98cc2816dcb12934dbd4d01a4ca5a995ec42406343e7bdf86f797f427d1 not found: ID does not exist"
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.340247 4966 scope.go:117] "RemoveContainer" containerID="a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae"
Jan 27 16:28:49 crc kubenswrapper[4966]: E0127 16:28:49.340774 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae\": container with ID starting with a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae not found: ID does not exist" containerID="a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae"
Jan 27 16:28:49 crc kubenswrapper[4966]: I0127 16:28:49.340826 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae"} err="failed to get container status \"a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae\": rpc error: code = NotFound desc = could not find container \"a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae\": container with ID starting with a90827decc2cef37ed0f28fa097f8dedc565ff7342cd1d374dbb645e36441dae not found: ID does not exist"
Jan 27 16:28:50 crc kubenswrapper[4966]: I0127 16:28:50.532956 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" path="/var/lib/kubelet/pods/d2fc58d7-1031-43db-93ea-bb524ce24398/volumes"
Jan 27 16:29:40 crc kubenswrapper[4966]: I0127 16:29:40.119174 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:29:40 crc kubenswrapper[4966]: I0127 16:29:40.119642 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.162805 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"]
Jan 27 16:30:00 crc kubenswrapper[4966]: E0127 16:30:00.164110 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerName="registry-server"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.164132 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerName="registry-server"
Jan 27 16:30:00 crc kubenswrapper[4966]: E0127 16:30:00.164167 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerName="extract-content"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.164187 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerName="extract-content"
Jan 27 16:30:00 crc kubenswrapper[4966]: E0127 16:30:00.164204 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerName="extract-utilities"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.164213 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerName="extract-utilities"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.164710 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2fc58d7-1031-43db-93ea-bb524ce24398" containerName="registry-server"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.165822 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.169591 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.176244 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"]
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.177602 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.250019 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-secret-volume\") pod \"collect-profiles-29492190-gp2zc\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.250158 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slv5k\" (UniqueName: \"kubernetes.io/projected/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-kube-api-access-slv5k\") pod \"collect-profiles-29492190-gp2zc\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.250397 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-config-volume\") pod \"collect-profiles-29492190-gp2zc\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.352728 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-config-volume\") pod \"collect-profiles-29492190-gp2zc\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.352878 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-secret-volume\") pod \"collect-profiles-29492190-gp2zc\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.352951 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slv5k\" (UniqueName: \"kubernetes.io/projected/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-kube-api-access-slv5k\") pod \"collect-profiles-29492190-gp2zc\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.353699 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-config-volume\") pod \"collect-profiles-29492190-gp2zc\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.360694 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-secret-volume\") pod \"collect-profiles-29492190-gp2zc\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.370821 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slv5k\" (UniqueName: \"kubernetes.io/projected/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-kube-api-access-slv5k\") pod \"collect-profiles-29492190-gp2zc\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.501933 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:00 crc kubenswrapper[4966]: I0127 16:30:00.969500 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"]
Jan 27 16:30:01 crc kubenswrapper[4966]: I0127 16:30:01.959250 4966 generic.go:334] "Generic (PLEG): container finished" podID="3e03c47b-3202-4b85-a3dd-af2ab3f241c6" containerID="557e37632b334ffef502c75e04fa7fe38acde25762befcf7ccaf57b982472e01" exitCode=0
Jan 27 16:30:01 crc kubenswrapper[4966]: I0127 16:30:01.959346 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc" event={"ID":"3e03c47b-3202-4b85-a3dd-af2ab3f241c6","Type":"ContainerDied","Data":"557e37632b334ffef502c75e04fa7fe38acde25762befcf7ccaf57b982472e01"}
Jan 27 16:30:01 crc kubenswrapper[4966]: I0127 16:30:01.959584 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc" event={"ID":"3e03c47b-3202-4b85-a3dd-af2ab3f241c6","Type":"ContainerStarted","Data":"5ac41f5b1a3f4ab370505151300047dde0bae5425b7c396caf4b962e0c2a8a72"}
Jan 27 16:30:01 crc kubenswrapper[4966]: I0127 16:30:01.962208 4966 generic.go:334] "Generic (PLEG): container finished" podID="da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" containerID="7b694d888ebd3ae8e50ece30fdfabbf163500e0f07a41b0d52891ac5f2359aed" exitCode=0
Jan 27 16:30:01 crc kubenswrapper[4966]: I0127 16:30:01.962254 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" event={"ID":"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe","Type":"ContainerDied","Data":"7b694d888ebd3ae8e50ece30fdfabbf163500e0f07a41b0d52891ac5f2359aed"}
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.420689 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.438207 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slv5k\" (UniqueName: \"kubernetes.io/projected/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-kube-api-access-slv5k\") pod \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.438330 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-secret-volume\") pod \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.438457 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-config-volume\") pod \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\" (UID: \"3e03c47b-3202-4b85-a3dd-af2ab3f241c6\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.439662 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "3e03c47b-3202-4b85-a3dd-af2ab3f241c6" (UID: "3e03c47b-3202-4b85-a3dd-af2ab3f241c6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.440837 4966 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.447436 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3e03c47b-3202-4b85-a3dd-af2ab3f241c6" (UID: "3e03c47b-3202-4b85-a3dd-af2ab3f241c6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.448416 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-kube-api-access-slv5k" (OuterVolumeSpecName: "kube-api-access-slv5k") pod "3e03c47b-3202-4b85-a3dd-af2ab3f241c6" (UID: "3e03c47b-3202-4b85-a3dd-af2ab3f241c6"). InnerVolumeSpecName "kube-api-access-slv5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.543316 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slv5k\" (UniqueName: \"kubernetes.io/projected/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-kube-api-access-slv5k\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.543379 4966 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e03c47b-3202-4b85-a3dd-af2ab3f241c6-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.562263 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5"
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.645107 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-ssh-key-openstack-edpm-ipam\") pod \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.645188 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbkhr\" (UniqueName: \"kubernetes.io/projected/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-kube-api-access-lbkhr\") pod \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.645261 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-1\") pod \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.645363 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-extra-config-0\") pod \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.645388 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-0\") pod \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.645456 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-1\") pod \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.645632 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-inventory\") pod \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.645723 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-combined-ca-bundle\") pod \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.645768 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-0\") pod \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\" (UID: \"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe\") "
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.656287 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" (UID: "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.665494 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-kube-api-access-lbkhr" (OuterVolumeSpecName: "kube-api-access-lbkhr") pod "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" (UID: "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe"). InnerVolumeSpecName "kube-api-access-lbkhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.678208 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" (UID: "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.685861 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" (UID: "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.686878 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-inventory" (OuterVolumeSpecName: "inventory") pod "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" (UID: "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.689614 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" (UID: "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.693171 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" (UID: "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.697278 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" (UID: "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.725865 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" (UID: "da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.748484 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.748521 4966 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.748536 4966 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.748548 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.748559 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbkhr\" (UniqueName: \"kubernetes.io/projected/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-kube-api-access-lbkhr\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.748571 4966 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.748583 4966 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.748595 4966 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:03 crc kubenswrapper[4966]: I0127 16:30:03.748606 4966 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.018602 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.018611 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc" event={"ID":"3e03c47b-3202-4b85-a3dd-af2ab3f241c6","Type":"ContainerDied","Data":"5ac41f5b1a3f4ab370505151300047dde0bae5425b7c396caf4b962e0c2a8a72"}
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.018967 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ac41f5b1a3f4ab370505151300047dde0bae5425b7c396caf4b962e0c2a8a72"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.031075 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5" event={"ID":"da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe","Type":"ContainerDied","Data":"dc0e2d832691a677244da11db1a181dc29f29e720dc91d64203f1ddae87aa577"}
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.031128 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0e2d832691a677244da11db1a181dc29f29e720dc91d64203f1ddae87aa577"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.031215 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cvnb5"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.087770 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"]
Jan 27 16:30:04 crc kubenswrapper[4966]: E0127 16:30:04.088325 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.088339 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 27 16:30:04 crc kubenswrapper[4966]: E0127 16:30:04.088356 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e03c47b-3202-4b85-a3dd-af2ab3f241c6" containerName="collect-profiles"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.088362 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e03c47b-3202-4b85-a3dd-af2ab3f241c6" containerName="collect-profiles"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.088610 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e03c47b-3202-4b85-a3dd-af2ab3f241c6" containerName="collect-profiles"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.088637 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.089569 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.097864 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.097978 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.098020 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.098174 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.098874 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.114459 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"]
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.158628 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.158703 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.158732 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.158872 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.158980 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67cnd\" (UniqueName: \"kubernetes.io/projected/492de347-8c7a-4efc-a0ad-000c4da9df94-kube-api-access-67cnd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.159011 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.161059 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.263877 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.264004 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.264035 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.264166 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.264265 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67cnd\" (UniqueName: \"kubernetes.io/projected/492de347-8c7a-4efc-a0ad-000c4da9df94-kube-api-access-67cnd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.264306 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.264338 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.269491 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.271182 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.271184 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.271380 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.271810 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.272437 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.283719 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67cnd\" (UniqueName: \"kubernetes.io/projected/492de347-8c7a-4efc-a0ad-000c4da9df94-kube-api-access-67cnd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.424918 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.517942 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"]
Jan 27 16:30:04 crc kubenswrapper[4966]: I0127 16:30:04.561242 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-nwrxn"]
Jan 27 16:30:05 crc kubenswrapper[4966]: I0127 16:30:05.001308 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb"]
Jan 27 16:30:05 crc kubenswrapper[4966]: I0127 16:30:05.041750 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb" event={"ID":"492de347-8c7a-4efc-a0ad-000c4da9df94","Type":"ContainerStarted","Data":"967d20473ccc02eb3252f99916af0fa0732cd9a9e800b8fb5dedf07f38581e3d"}
Jan 27 16:30:06 crc kubenswrapper[4966]: I0127 16:30:06.071320 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb" event={"ID":"492de347-8c7a-4efc-a0ad-000c4da9df94","Type":"ContainerStarted","Data":"22156afbc76189adbbd61ccbdc8d5e8f9fca0b6f86b909691479bc11b5947df7"}
Jan 27 16:30:06 crc kubenswrapper[4966]: I0127 16:30:06.097709 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb" podStartSLOduration=1.459884888 podStartE2EDuration="2.097692408s" podCreationTimestamp="2026-01-27 16:30:04 +0000 UTC" firstStartedPulling="2026-01-27 16:30:05.003182957 +0000 UTC m=+2871.305976445" lastFinishedPulling="2026-01-27 16:30:05.640990467 +0000 UTC m=+2871.943783965" observedRunningTime="2026-01-27 16:30:06.089591224 +0000 UTC m=+2872.392384742" watchObservedRunningTime="2026-01-27 16:30:06.097692408 +0000 UTC m=+2872.400485896"
Jan 27 16:30:06 crc kubenswrapper[4966]: I0127 16:30:06.545489 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e60a61a8-a15f-4e91-b12e-f77c8b9c7397" path="/var/lib/kubelet/pods/e60a61a8-a15f-4e91-b12e-f77c8b9c7397/volumes"
Jan 27 16:30:10 crc kubenswrapper[4966]: I0127 16:30:10.119427 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:30:10 crc kubenswrapper[4966]: I0127 16:30:10.120004 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:30:40 crc kubenswrapper[4966]: I0127 16:30:40.119413 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:30:40 crc kubenswrapper[4966]: I0127 16:30:40.120036 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:30:40 crc kubenswrapper[4966]: I0127 16:30:40.120087 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v"
Jan 27 16:30:40 crc kubenswrapper[4966]: I0127 16:30:40.121095 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 16:30:40 crc kubenswrapper[4966]: I0127 16:30:40.121163 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" gracePeriod=600
Jan 27 16:30:40 crc kubenswrapper[4966]: E0127 16:30:40.244212 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:30:40 crc kubenswrapper[4966]: I0127 16:30:40.447709 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" exitCode=0
Jan 27 16:30:40 crc kubenswrapper[4966]: I0127 16:30:40.447750 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d"}
Jan 27 16:30:40 crc kubenswrapper[4966]: I0127 16:30:40.447785 4966 scope.go:117] "RemoveContainer" containerID="bd619a964f88149c2264863e47143f093900ae2ffb4faa0bcd78f7964962f4fe"
Jan 27 16:30:40 crc kubenswrapper[4966]: I0127 16:30:40.448753 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d"
Jan 27 16:30:40 crc kubenswrapper[4966]: E0127 16:30:40.449227 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:30:47 crc kubenswrapper[4966]: I0127 16:30:47.502470 4966 scope.go:117] "RemoveContainer" containerID="d654bc5db9e9b1fa0be2d16fcab340e28e9c8c7a8cf1472d700975447199ea3f"
Jan 27 16:30:54 crc kubenswrapper[4966]: I0127 16:30:54.529397 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d"
Jan 27 16:30:54 crc kubenswrapper[4966]: E0127 16:30:54.530258 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:31:06 crc kubenswrapper[4966]: I0127 16:31:06.521016 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d"
Jan 27 16:31:06 crc kubenswrapper[4966]: E0127 16:31:06.521794 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.147189 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bs528"]
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.151260 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.159529 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bs528"]
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.193343 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-catalog-content\") pod \"community-operators-bs528\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.193502 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-utilities\") pod \"community-operators-bs528\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.193551 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzh9q\" (UniqueName: \"kubernetes.io/projected/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-kube-api-access-mzh9q\") pod \"community-operators-bs528\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.297495 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-catalog-content\") pod \"community-operators-bs528\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.297708 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-utilities\") pod \"community-operators-bs528\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.297773 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzh9q\" (UniqueName: \"kubernetes.io/projected/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-kube-api-access-mzh9q\") pod \"community-operators-bs528\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.298730 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-catalog-content\") pod \"community-operators-bs528\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.299042 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-utilities\") pod \"community-operators-bs528\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.327172 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzh9q\" (UniqueName: \"kubernetes.io/projected/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-kube-api-access-mzh9q\") pod \"community-operators-bs528\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.478736 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bs528"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.770970 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z7mdc"]
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.775975 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.799781 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7mdc"]
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.911114 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-utilities\") pod \"certified-operators-z7mdc\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.911332 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp4kk\" (UniqueName: \"kubernetes.io/projected/4d063213-fc23-4aae-bbdd-c07536fffd55-kube-api-access-mp4kk\") pod \"certified-operators-z7mdc\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:12 crc kubenswrapper[4966]: I0127 16:31:12.911443 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-catalog-content\") pod \"certified-operators-z7mdc\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.013584 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp4kk\" (UniqueName: \"kubernetes.io/projected/4d063213-fc23-4aae-bbdd-c07536fffd55-kube-api-access-mp4kk\") pod \"certified-operators-z7mdc\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.013708 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-catalog-content\") pod \"certified-operators-z7mdc\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.013774 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-utilities\") pod \"certified-operators-z7mdc\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.014270 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-utilities\") pod \"certified-operators-z7mdc\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.014758 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-catalog-content\") pod \"certified-operators-z7mdc\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.047188 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp4kk\" (UniqueName: \"kubernetes.io/projected/4d063213-fc23-4aae-bbdd-c07536fffd55-kube-api-access-mp4kk\") pod \"certified-operators-z7mdc\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.079814 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bs528"]
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.116433 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7mdc"
Jan 27 16:31:13 crc kubenswrapper[4966]: W0127 16:31:13.703454 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d063213_fc23_4aae_bbdd_c07536fffd55.slice/crio-210da336c628039cd6d343aab0d4cf446d195d698c1813bdfa2f9b94149f3fb8 WatchSource:0}: Error finding container 210da336c628039cd6d343aab0d4cf446d195d698c1813bdfa2f9b94149f3fb8: Status 404 returned error can't find the container with id 210da336c628039cd6d343aab0d4cf446d195d698c1813bdfa2f9b94149f3fb8
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.709711 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7mdc"]
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.832260 4966 generic.go:334] "Generic (PLEG): container finished" podID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerID="eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c" exitCode=0
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.832315 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bs528" event={"ID":"00f68e89-c2bc-42da-bbc5-dc0ed7c23472","Type":"ContainerDied","Data":"eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c"}
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.832385 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bs528" event={"ID":"00f68e89-c2bc-42da-bbc5-dc0ed7c23472","Type":"ContainerStarted","Data":"02a94d593a481830296fa1b901baf9d88fb247c9f1175f70804b13770b77d58a"}
Jan 27 16:31:13 crc kubenswrapper[4966]: I0127 16:31:13.835376 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7mdc" event={"ID":"4d063213-fc23-4aae-bbdd-c07536fffd55","Type":"ContainerStarted","Data":"210da336c628039cd6d343aab0d4cf446d195d698c1813bdfa2f9b94149f3fb8"}
Jan 27 16:31:14 crc kubenswrapper[4966]: I0127 16:31:14.859885 4966 generic.go:334] "Generic (PLEG): container finished" podID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerID="2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13" exitCode=0
Jan 27 16:31:14 crc kubenswrapper[4966]: I0127 16:31:14.860008 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7mdc" event={"ID":"4d063213-fc23-4aae-bbdd-c07536fffd55","Type":"ContainerDied","Data":"2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13"}
Jan 27 16:31:15 crc kubenswrapper[4966]: I0127 16:31:15.873123 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7mdc" event={"ID":"4d063213-fc23-4aae-bbdd-c07536fffd55","Type":"ContainerStarted","Data":"eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c"}
Jan 27 16:31:15 crc kubenswrapper[4966]: I0127 16:31:15.875185 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bs528" event={"ID":"00f68e89-c2bc-42da-bbc5-dc0ed7c23472","Type":"ContainerStarted","Data":"992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1"}
Jan 27 16:31:16 crc kubenswrapper[4966]: I0127 16:31:16.888830 4966 generic.go:334] "Generic (PLEG): container finished" podID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerID="992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1" exitCode=0
Jan 27 16:31:16 crc kubenswrapper[4966]: I0127 16:31:16.888959 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bs528" event={"ID":"00f68e89-c2bc-42da-bbc5-dc0ed7c23472","Type":"ContainerDied","Data":"992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1"}
Jan 27 16:31:17 crc kubenswrapper[4966]: I0127 16:31:17.901424 4966 generic.go:334] "Generic (PLEG): container finished" podID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerID="eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c" exitCode=0
Jan 27 16:31:17 crc kubenswrapper[4966]: I0127 16:31:17.901830 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7mdc" event={"ID":"4d063213-fc23-4aae-bbdd-c07536fffd55","Type":"ContainerDied","Data":"eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c"}
Jan 27 16:31:17 crc kubenswrapper[4966]: I0127 16:31:17.907525 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bs528" event={"ID":"00f68e89-c2bc-42da-bbc5-dc0ed7c23472","Type":"ContainerStarted","Data":"79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab"}
Jan 27 16:31:17 crc kubenswrapper[4966]: I0127 16:31:17.945740 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bs528" podStartSLOduration=2.472363876 podStartE2EDuration="5.94571524s" podCreationTimestamp="2026-01-27 16:31:12 +0000 UTC" firstStartedPulling="2026-01-27 16:31:13.834943519 +0000 UTC m=+2940.137737007" lastFinishedPulling="2026-01-27 16:31:17.308294883 +0000 UTC m=+2943.611088371" observedRunningTime="2026-01-27 16:31:17.943308375 +0000 UTC m=+2944.246101873" watchObservedRunningTime="2026-01-27 16:31:17.94571524 +0000 UTC m=+2944.248508728"
Jan 27 16:31:18 crc kubenswrapper[4966]: I0127 16:31:18.522392 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d"
Jan 27 16:31:18 crc kubenswrapper[4966]: E0127 16:31:18.523253 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:31:18 crc kubenswrapper[4966]: I0127 16:31:18.919729 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7mdc" event={"ID":"4d063213-fc23-4aae-bbdd-c07536fffd55","Type":"ContainerStarted","Data":"070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e"} Jan 27 16:31:18 crc kubenswrapper[4966]: I0127 16:31:18.951784 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z7mdc" podStartSLOduration=3.487094306 podStartE2EDuration="6.951761757s" podCreationTimestamp="2026-01-27 16:31:12 +0000 UTC" firstStartedPulling="2026-01-27 16:31:14.8650468 +0000 UTC m=+2941.167840288" lastFinishedPulling="2026-01-27 16:31:18.329714251 +0000 UTC m=+2944.632507739" observedRunningTime="2026-01-27 16:31:18.938110969 +0000 UTC m=+2945.240904467" watchObservedRunningTime="2026-01-27 16:31:18.951761757 +0000 UTC m=+2945.254555245" Jan 27 16:31:22 crc kubenswrapper[4966]: I0127 16:31:22.479606 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bs528" Jan 27 16:31:22 crc kubenswrapper[4966]: I0127 16:31:22.480269 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bs528" Jan 27 16:31:22 crc kubenswrapper[4966]: I0127 16:31:22.562155 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bs528" Jan 27 16:31:23 crc kubenswrapper[4966]: I0127 16:31:23.015832 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bs528" Jan 27 16:31:23 crc kubenswrapper[4966]: I0127 16:31:23.117641 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z7mdc" Jan 27 16:31:23 crc kubenswrapper[4966]: I0127 16:31:23.117704 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z7mdc" Jan 27 16:31:24 crc kubenswrapper[4966]: I0127 16:31:24.173435 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z7mdc" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerName="registry-server" probeResult="failure" output=< Jan 27 16:31:24 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:31:24 crc kubenswrapper[4966]: > Jan 27 16:31:26 crc kubenswrapper[4966]: I0127 16:31:26.132781 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bs528"] Jan 27 16:31:26 crc kubenswrapper[4966]: I0127 16:31:26.134491 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bs528" podUID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerName="registry-server" containerID="cri-o://79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab" gracePeriod=2 Jan 27 16:31:26 crc kubenswrapper[4966]: I0127 16:31:26.808959 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bs528" Jan 27 16:31:26 crc kubenswrapper[4966]: I0127 16:31:26.904836 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-catalog-content\") pod \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " Jan 27 16:31:26 crc kubenswrapper[4966]: I0127 16:31:26.904935 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzh9q\" (UniqueName: \"kubernetes.io/projected/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-kube-api-access-mzh9q\") pod \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " Jan 27 16:31:26 crc kubenswrapper[4966]: I0127 16:31:26.905091 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-utilities\") pod \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\" (UID: \"00f68e89-c2bc-42da-bbc5-dc0ed7c23472\") " Jan 27 16:31:26 crc kubenswrapper[4966]: I0127 16:31:26.905756 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-utilities" (OuterVolumeSpecName: "utilities") pod "00f68e89-c2bc-42da-bbc5-dc0ed7c23472" (UID: "00f68e89-c2bc-42da-bbc5-dc0ed7c23472"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:31:26 crc kubenswrapper[4966]: I0127 16:31:26.911913 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-kube-api-access-mzh9q" (OuterVolumeSpecName: "kube-api-access-mzh9q") pod "00f68e89-c2bc-42da-bbc5-dc0ed7c23472" (UID: "00f68e89-c2bc-42da-bbc5-dc0ed7c23472"). InnerVolumeSpecName "kube-api-access-mzh9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:31:26 crc kubenswrapper[4966]: I0127 16:31:26.962271 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00f68e89-c2bc-42da-bbc5-dc0ed7c23472" (UID: "00f68e89-c2bc-42da-bbc5-dc0ed7c23472"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.002183 4966 generic.go:334] "Generic (PLEG): container finished" podID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerID="79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab" exitCode=0 Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.002237 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bs528" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.002242 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bs528" event={"ID":"00f68e89-c2bc-42da-bbc5-dc0ed7c23472","Type":"ContainerDied","Data":"79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab"} Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.002861 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bs528" event={"ID":"00f68e89-c2bc-42da-bbc5-dc0ed7c23472","Type":"ContainerDied","Data":"02a94d593a481830296fa1b901baf9d88fb247c9f1175f70804b13770b77d58a"} Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.002927 4966 scope.go:117] "RemoveContainer" containerID="79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.009587 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.009632 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzh9q\" (UniqueName: \"kubernetes.io/projected/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-kube-api-access-mzh9q\") on node \"crc\" DevicePath \"\"" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.009650 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f68e89-c2bc-42da-bbc5-dc0ed7c23472-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.035856 4966 scope.go:117] "RemoveContainer" containerID="992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.041626 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bs528"] Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.055622 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bs528"] Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.084423 4966 scope.go:117] "RemoveContainer" containerID="eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.124009 4966 scope.go:117] "RemoveContainer" containerID="79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab" Jan 27 16:31:27 crc kubenswrapper[4966]: E0127 16:31:27.124628 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab\": container with ID starting with 79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab not found: ID does not exist" containerID="79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.124672 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab"} err="failed to get container status \"79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab\": rpc error: code = NotFound desc = could not find container \"79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab\": container with ID starting 
with 79a841cd030e1a19a5f9616c91e44e103379d274e62db676f727951948a082ab not found: ID does not exist" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.124697 4966 scope.go:117] "RemoveContainer" containerID="992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1" Jan 27 16:31:27 crc kubenswrapper[4966]: E0127 16:31:27.125176 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1\": container with ID starting with 992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1 not found: ID does not exist" containerID="992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.125237 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1"} err="failed to get container status \"992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1\": rpc error: code = NotFound desc = could not find container \"992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1\": container with ID starting with 992f22153fc29b39b6ded43329afe34b79770c6fbadf79e731a89555ab7e82d1 not found: ID does not exist" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.125274 4966 scope.go:117] "RemoveContainer" containerID="eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c" Jan 27 16:31:27 crc kubenswrapper[4966]: E0127 16:31:27.126085 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c\": container with ID starting with eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c not found: ID does not exist" containerID="eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c" Jan 27 16:31:27 crc kubenswrapper[4966]: I0127 16:31:27.126108 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c"} err="failed to get container status \"eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c\": rpc error: code = NotFound desc = could not find container \"eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c\": container with ID starting with eb370bb9b5a583b0f948883ea8ba9c85c9b6d84f5ba57e8e0d6465745b69cd7c not found: ID does not exist" Jan 27 16:31:27 crc kubenswrapper[4966]: E0127 16:31:27.288931 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00f68e89_c2bc_42da_bbc5_dc0ed7c23472.slice/crio-02a94d593a481830296fa1b901baf9d88fb247c9f1175f70804b13770b77d58a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00f68e89_c2bc_42da_bbc5_dc0ed7c23472.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:31:28 crc kubenswrapper[4966]: I0127 16:31:28.535808 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" path="/var/lib/kubelet/pods/00f68e89-c2bc-42da-bbc5-dc0ed7c23472/volumes" Jan 27 16:31:31 crc kubenswrapper[4966]: I0127 16:31:31.521222 4966 scope.go:117] "RemoveContainer" 
containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:31:31 crc kubenswrapper[4966]: E0127 16:31:31.522121 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:31:33 crc kubenswrapper[4966]: I0127 16:31:33.183682 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z7mdc" Jan 27 16:31:33 crc kubenswrapper[4966]: I0127 16:31:33.241016 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z7mdc" Jan 27 16:31:33 crc kubenswrapper[4966]: I0127 16:31:33.709468 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7mdc"] Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.087313 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z7mdc" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerName="registry-server" containerID="cri-o://070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e" gracePeriod=2 Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.622030 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7mdc" Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.638097 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-catalog-content\") pod \"4d063213-fc23-4aae-bbdd-c07536fffd55\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.638246 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-utilities\") pod \"4d063213-fc23-4aae-bbdd-c07536fffd55\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.638361 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp4kk\" (UniqueName: \"kubernetes.io/projected/4d063213-fc23-4aae-bbdd-c07536fffd55-kube-api-access-mp4kk\") pod \"4d063213-fc23-4aae-bbdd-c07536fffd55\" (UID: \"4d063213-fc23-4aae-bbdd-c07536fffd55\") " Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.639616 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-utilities" (OuterVolumeSpecName: "utilities") pod "4d063213-fc23-4aae-bbdd-c07536fffd55" (UID: "4d063213-fc23-4aae-bbdd-c07536fffd55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.644394 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d063213-fc23-4aae-bbdd-c07536fffd55-kube-api-access-mp4kk" (OuterVolumeSpecName: "kube-api-access-mp4kk") pod "4d063213-fc23-4aae-bbdd-c07536fffd55" (UID: "4d063213-fc23-4aae-bbdd-c07536fffd55"). 
InnerVolumeSpecName "kube-api-access-mp4kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.702484 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d063213-fc23-4aae-bbdd-c07536fffd55" (UID: "4d063213-fc23-4aae-bbdd-c07536fffd55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.740716 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.740751 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp4kk\" (UniqueName: \"kubernetes.io/projected/4d063213-fc23-4aae-bbdd-c07536fffd55-kube-api-access-mp4kk\") on node \"crc\" DevicePath \"\"" Jan 27 16:31:35 crc kubenswrapper[4966]: I0127 16:31:35.740762 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d063213-fc23-4aae-bbdd-c07536fffd55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.100386 4966 generic.go:334] "Generic (PLEG): container finished" podID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerID="070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e" exitCode=0 Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.100452 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7mdc" event={"ID":"4d063213-fc23-4aae-bbdd-c07536fffd55","Type":"ContainerDied","Data":"070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e"} Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.100722 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7mdc" event={"ID":"4d063213-fc23-4aae-bbdd-c07536fffd55","Type":"ContainerDied","Data":"210da336c628039cd6d343aab0d4cf446d195d698c1813bdfa2f9b94149f3fb8"} Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.100745 4966 scope.go:117] "RemoveContainer" containerID="070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.100472 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z7mdc" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.129477 4966 scope.go:117] "RemoveContainer" containerID="eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.156077 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7mdc"] Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.168094 4966 scope.go:117] "RemoveContainer" containerID="2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.174785 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z7mdc"] Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.215769 4966 scope.go:117] "RemoveContainer" containerID="070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e" Jan 27 16:31:36 crc kubenswrapper[4966]: E0127 16:31:36.216440 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e\": container with ID starting with 070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e not found: ID does not exist" containerID="070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.216488 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e"} err="failed to get container status \"070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e\": rpc error: code = NotFound desc = could not find container \"070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e\": container with ID starting with 070ee715c1a59466066f8b11faa690d560ae4f998b413ae3e02311909f8d207e not found: ID does not exist" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.216516 4966 scope.go:117] "RemoveContainer" containerID="eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c" Jan 27 16:31:36 crc kubenswrapper[4966]: E0127 16:31:36.217050 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c\": container with ID starting with eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c not found: ID does not exist" containerID="eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.217170 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c"} err="failed to get container status \"eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c\": rpc error: code = NotFound desc = could not find container \"eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c\": container with ID starting with eac63b89d4302b1990b19b2bc4dfedc589b82144b149eea55191c81983e8323c not found: ID does not exist" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.217216 4966 scope.go:117] "RemoveContainer" containerID="2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13" Jan 27 16:31:36 crc kubenswrapper[4966]: E0127 16:31:36.217588 4966 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13\": container with ID starting with 2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13 not found: ID does not exist" containerID="2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.217711 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13"} err="failed to get container status \"2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13\": rpc error: code = NotFound desc = could not find container \"2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13\": container with ID starting with 2863684495f7c7088092961d4d847025efd2fd133d1efd7e9690d0aa9940bb13 not found: ID does not exist" Jan 27 16:31:36 crc kubenswrapper[4966]: I0127 16:31:36.534479 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" path="/var/lib/kubelet/pods/4d063213-fc23-4aae-bbdd-c07536fffd55/volumes" Jan 27 16:31:46 crc kubenswrapper[4966]: I0127 16:31:46.521113 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:31:46 crc kubenswrapper[4966]: E0127 16:31:46.521846 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:31:58 crc kubenswrapper[4966]: I0127 16:31:58.522543 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:31:58 crc kubenswrapper[4966]: E0127 16:31:58.524508 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:32:09 crc kubenswrapper[4966]: I0127 16:32:09.521246 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:32:09 crc kubenswrapper[4966]: E0127 16:32:09.522181 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:32:23 crc kubenswrapper[4966]: I0127 16:32:23.521542 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:32:23 crc kubenswrapper[4966]: E0127 16:32:23.522647 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:32:30 crc kubenswrapper[4966]: I0127 16:32:30.692962 4966 generic.go:334] "Generic (PLEG): container finished" podID="492de347-8c7a-4efc-a0ad-000c4da9df94" containerID="22156afbc76189adbbd61ccbdc8d5e8f9fca0b6f86b909691479bc11b5947df7" exitCode=0 Jan 27 16:32:30 crc kubenswrapper[4966]: I0127 16:32:30.693024 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb" event={"ID":"492de347-8c7a-4efc-a0ad-000c4da9df94","Type":"ContainerDied","Data":"22156afbc76189adbbd61ccbdc8d5e8f9fca0b6f86b909691479bc11b5947df7"} Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.223151 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.393780 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-2\") pod \"492de347-8c7a-4efc-a0ad-000c4da9df94\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.393877 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67cnd\" (UniqueName: \"kubernetes.io/projected/492de347-8c7a-4efc-a0ad-000c4da9df94-kube-api-access-67cnd\") pod \"492de347-8c7a-4efc-a0ad-000c4da9df94\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.393921 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-0\") pod \"492de347-8c7a-4efc-a0ad-000c4da9df94\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.394148 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ssh-key-openstack-edpm-ipam\") pod \"492de347-8c7a-4efc-a0ad-000c4da9df94\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.394280 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-inventory\") pod \"492de347-8c7a-4efc-a0ad-000c4da9df94\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.394300 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-telemetry-combined-ca-bundle\") pod \"492de347-8c7a-4efc-a0ad-000c4da9df94\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.394355 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-1\") pod \"492de347-8c7a-4efc-a0ad-000c4da9df94\" (UID: \"492de347-8c7a-4efc-a0ad-000c4da9df94\") " Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.399906 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "492de347-8c7a-4efc-a0ad-000c4da9df94" (UID: "492de347-8c7a-4efc-a0ad-000c4da9df94"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.406209 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492de347-8c7a-4efc-a0ad-000c4da9df94-kube-api-access-67cnd" (OuterVolumeSpecName: "kube-api-access-67cnd") pod "492de347-8c7a-4efc-a0ad-000c4da9df94" (UID: "492de347-8c7a-4efc-a0ad-000c4da9df94"). InnerVolumeSpecName "kube-api-access-67cnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.427437 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "492de347-8c7a-4efc-a0ad-000c4da9df94" (UID: "492de347-8c7a-4efc-a0ad-000c4da9df94"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.431080 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-inventory" (OuterVolumeSpecName: "inventory") pod "492de347-8c7a-4efc-a0ad-000c4da9df94" (UID: "492de347-8c7a-4efc-a0ad-000c4da9df94"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.433802 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "492de347-8c7a-4efc-a0ad-000c4da9df94" (UID: "492de347-8c7a-4efc-a0ad-000c4da9df94"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.434718 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "492de347-8c7a-4efc-a0ad-000c4da9df94" (UID: "492de347-8c7a-4efc-a0ad-000c4da9df94"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.434946 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "492de347-8c7a-4efc-a0ad-000c4da9df94" (UID: "492de347-8c7a-4efc-a0ad-000c4da9df94"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.497242 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.497303 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.497325 4966 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.497338 4966 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.497351 4966 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.497367 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67cnd\" (UniqueName: \"kubernetes.io/projected/492de347-8c7a-4efc-a0ad-000c4da9df94-kube-api-access-67cnd\") on node \"crc\" DevicePath \"\"" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.497379 4966 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/492de347-8c7a-4efc-a0ad-000c4da9df94-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.718588 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb" event={"ID":"492de347-8c7a-4efc-a0ad-000c4da9df94","Type":"ContainerDied","Data":"967d20473ccc02eb3252f99916af0fa0732cd9a9e800b8fb5dedf07f38581e3d"} Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.719049 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="967d20473ccc02eb3252f99916af0fa0732cd9a9e800b8fb5dedf07f38581e3d" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.719111 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.845773 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc"] Jan 27 16:32:32 crc kubenswrapper[4966]: E0127 16:32:32.846491 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerName="registry-server" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846508 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerName="registry-server" Jan 27 16:32:32 crc kubenswrapper[4966]: E0127 16:32:32.846524 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerName="registry-server" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846530 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerName="registry-server" Jan 27 16:32:32 crc kubenswrapper[4966]: E0127 16:32:32.846548 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerName="extract-content" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846555 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerName="extract-content" Jan 27 16:32:32 crc kubenswrapper[4966]: E0127 16:32:32.846573 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerName="extract-utilities" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846581 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerName="extract-utilities" Jan 27 16:32:32 crc kubenswrapper[4966]: E0127 16:32:32.846595 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerName="extract-utilities" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846602 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerName="extract-utilities" Jan 27 16:32:32 crc kubenswrapper[4966]: E0127 16:32:32.846616 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerName="extract-content" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846621 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerName="extract-content" Jan 27 16:32:32 crc kubenswrapper[4966]: E0127 16:32:32.846637 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492de347-8c7a-4efc-a0ad-000c4da9df94" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846646 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="492de347-8c7a-4efc-a0ad-000c4da9df94" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846913 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d063213-fc23-4aae-bbdd-c07536fffd55" containerName="registry-server" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846929 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f68e89-c2bc-42da-bbc5-dc0ed7c23472" containerName="registry-server" Jan 27 
16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.846951 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="492de347-8c7a-4efc-a0ad-000c4da9df94" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.847933 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.850985 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.850989 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.852783 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.852926 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.853003 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:32:32 crc kubenswrapper[4966]: I0127 16:32:32.861206 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc"] Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.009170 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.009555 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rg2\" (UniqueName: \"kubernetes.io/projected/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-kube-api-access-68rg2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.009695 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.009888 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 
16:32:33.010033 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.010606 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.011686 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.114032 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.114246 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.114364 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.114480 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.114623 4966 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-68rg2\" (UniqueName: \"kubernetes.io/projected/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-kube-api-access-68rg2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.114732 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.114861 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.119285 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.119718 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.120457 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.121088 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.124440 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-2\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.125349 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.134210 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rg2\" (UniqueName: \"kubernetes.io/projected/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-kube-api-access-68rg2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.168450 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.702136 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc"] Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.707252 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:32:33 crc kubenswrapper[4966]: I0127 16:32:33.729807 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" event={"ID":"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b","Type":"ContainerStarted","Data":"5acbd2091d3921f8d42ee9eb9af8138b1485d3a477cced37ab14a8d48302840d"} Jan 27 16:32:34 crc kubenswrapper[4966]: I0127 16:32:34.742200 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" event={"ID":"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b","Type":"ContainerStarted","Data":"400235e25d4b5142dfc92ee03484ef27b40847adf152d2c969bd30ede99e3ffb"} Jan 27 16:32:34 crc kubenswrapper[4966]: I0127 16:32:34.781131 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" podStartSLOduration=2.256954828 podStartE2EDuration="2.781096893s" podCreationTimestamp="2026-01-27 16:32:32 +0000 UTC" firstStartedPulling="2026-01-27 16:32:33.706995202 +0000 UTC m=+3020.009788690" lastFinishedPulling="2026-01-27 16:32:34.231137267 +0000 UTC m=+3020.533930755" observedRunningTime="2026-01-27 16:32:34.769934163 +0000 UTC m=+3021.072727661" watchObservedRunningTime="2026-01-27 16:32:34.781096893 +0000 UTC m=+3021.083890381" Jan 27 16:32:37 crc kubenswrapper[4966]: I0127 16:32:37.521200 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:32:37 crc kubenswrapper[4966]: E0127 16:32:37.521714 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:32:52 crc kubenswrapper[4966]: I0127 16:32:52.521884 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:32:52 crc kubenswrapper[4966]: E0127 16:32:52.522629 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:33:03 crc kubenswrapper[4966]: I0127 16:33:03.522160 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:33:03 crc kubenswrapper[4966]: E0127 16:33:03.523116 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:33:15 crc kubenswrapper[4966]: I0127 16:33:15.521961 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:33:15 crc kubenswrapper[4966]: E0127 16:33:15.522732 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:33:28 crc kubenswrapper[4966]: I0127 16:33:28.521054 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:33:28 crc kubenswrapper[4966]: E0127 16:33:28.521779 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:33:40 crc kubenswrapper[4966]: I0127 16:33:40.525673 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:33:40 crc kubenswrapper[4966]: E0127 16:33:40.526547 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" 
podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:33:55 crc kubenswrapper[4966]: I0127 16:33:55.521543 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:33:55 crc kubenswrapper[4966]: E0127 16:33:55.522540 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:34:06 crc kubenswrapper[4966]: I0127 16:34:06.521689 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:34:06 crc kubenswrapper[4966]: E0127 16:34:06.522579 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:34:17 crc kubenswrapper[4966]: I0127 16:34:17.521785 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:34:17 crc kubenswrapper[4966]: E0127 16:34:17.522847 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:34:30 crc kubenswrapper[4966]: I0127 16:34:30.522052 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:34:30 crc kubenswrapper[4966]: E0127 16:34:30.523075 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:34:33 crc kubenswrapper[4966]: I0127 16:34:33.000147 4966 generic.go:334] "Generic (PLEG): container finished" podID="78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" containerID="400235e25d4b5142dfc92ee03484ef27b40847adf152d2c969bd30ede99e3ffb" exitCode=0 Jan 27 16:34:33 crc kubenswrapper[4966]: I0127 16:34:33.000242 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" event={"ID":"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b","Type":"ContainerDied","Data":"400235e25d4b5142dfc92ee03484ef27b40847adf152d2c969bd30ede99e3ffb"} Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.462797 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.555252 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ssh-key-openstack-edpm-ipam\") pod \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.555464 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-0\") pod \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.555567 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68rg2\" (UniqueName: \"kubernetes.io/projected/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-kube-api-access-68rg2\") pod \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.555620 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-telemetry-power-monitoring-combined-ca-bundle\") pod \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.555666 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-1\") pod \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.556881 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-2\") pod \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.557555 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-inventory\") pod \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\" (UID: \"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b\") " Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.562802 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" (UID: "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.562958 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-kube-api-access-68rg2" (OuterVolumeSpecName: "kube-api-access-68rg2") pod "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" (UID: "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b"). InnerVolumeSpecName "kube-api-access-68rg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.590213 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" (UID: "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.591984 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" (UID: "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.601964 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" (UID: "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.606443 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-inventory" (OuterVolumeSpecName: "inventory") pod "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" (UID: "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.606887 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" (UID: "78e9dd7a-9cb3-47ec-8412-30ce3be2b93b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.660808 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.660847 4966 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.660863 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68rg2\" (UniqueName: \"kubernetes.io/projected/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-kube-api-access-68rg2\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.660883 4966 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.660899 4966 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.660929 4966 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:34 crc kubenswrapper[4966]: I0127 16:34:34.660944 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e9dd7a-9cb3-47ec-8412-30ce3be2b93b-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.019685 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" event={"ID":"78e9dd7a-9cb3-47ec-8412-30ce3be2b93b","Type":"ContainerDied","Data":"5acbd2091d3921f8d42ee9eb9af8138b1485d3a477cced37ab14a8d48302840d"} Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.019734 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5acbd2091d3921f8d42ee9eb9af8138b1485d3a477cced37ab14a8d48302840d" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.019744 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.126450 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc"] Jan 27 16:34:35 crc kubenswrapper[4966]: E0127 16:34:35.127029 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.127053 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.127384 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e9dd7a-9cb3-47ec-8412-30ce3be2b93b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.128347 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.133683 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hsd" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.133688 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.134450 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.134463 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.134631 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.137141 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc"] Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.281457 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.282201 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8hd\" (UniqueName: \"kubernetes.io/projected/2f03cdf3-ef7a-4b45-b92a-346469d17373-kube-api-access-ps8hd\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.282336 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-0\") 
pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.282480 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.282557 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.384978 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps8hd\" (UniqueName: \"kubernetes.io/projected/2f03cdf3-ef7a-4b45-b92a-346469d17373-kube-api-access-ps8hd\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.385287 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.385496 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.385627 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.385819 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.389412 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.390607 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.399324 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.399872 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.404352 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps8hd\" (UniqueName: \"kubernetes.io/projected/2f03cdf3-ef7a-4b45-b92a-346469d17373-kube-api-access-ps8hd\") pod \"logging-edpm-deployment-openstack-edpm-ipam-vkkgc\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.455436 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:35 crc kubenswrapper[4966]: I0127 16:34:35.972730 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc"] Jan 27 16:34:36 crc kubenswrapper[4966]: I0127 16:34:36.037118 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" event={"ID":"2f03cdf3-ef7a-4b45-b92a-346469d17373","Type":"ContainerStarted","Data":"8b4dd56754e160689f9c0337f0613e96d8b1489b25c324b39398b3844a8ae039"} Jan 27 16:34:38 crc kubenswrapper[4966]: I0127 16:34:38.065534 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" event={"ID":"2f03cdf3-ef7a-4b45-b92a-346469d17373","Type":"ContainerStarted","Data":"debc47ce4265d6049d7f382ea03653396c820f3bc59729cf2283f8828058fdf4"} Jan 27 16:34:38 crc kubenswrapper[4966]: I0127 16:34:38.096625 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" podStartSLOduration=1.6212008770000002 podStartE2EDuration="3.096604241s" podCreationTimestamp="2026-01-27 16:34:35 +0000 UTC" firstStartedPulling="2026-01-27 16:34:35.974704675 +0000 UTC m=+3142.277498163" lastFinishedPulling="2026-01-27 16:34:37.450108039 +0000 UTC m=+3143.752901527" observedRunningTime="2026-01-27 16:34:38.08633008 +0000 UTC m=+3144.389123588" watchObservedRunningTime="2026-01-27 16:34:38.096604241 +0000 UTC m=+3144.399397729" Jan 27 16:34:44 crc kubenswrapper[4966]: I0127 16:34:44.530409 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:34:44 crc kubenswrapper[4966]: E0127 16:34:44.531244 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:34:52 crc kubenswrapper[4966]: I0127 16:34:52.225658 4966 generic.go:334] "Generic (PLEG): container finished" podID="2f03cdf3-ef7a-4b45-b92a-346469d17373" containerID="debc47ce4265d6049d7f382ea03653396c820f3bc59729cf2283f8828058fdf4" exitCode=0 Jan 27 16:34:52 crc kubenswrapper[4966]: I0127 16:34:52.225739 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" event={"ID":"2f03cdf3-ef7a-4b45-b92a-346469d17373","Type":"ContainerDied","Data":"debc47ce4265d6049d7f382ea03653396c820f3bc59729cf2283f8828058fdf4"} Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.702026 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.789095 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-1\") pod \"2f03cdf3-ef7a-4b45-b92a-346469d17373\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.789197 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-ssh-key-openstack-edpm-ipam\") pod \"2f03cdf3-ef7a-4b45-b92a-346469d17373\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.789309 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-0\") pod \"2f03cdf3-ef7a-4b45-b92a-346469d17373\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.789393 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps8hd\" (UniqueName: \"kubernetes.io/projected/2f03cdf3-ef7a-4b45-b92a-346469d17373-kube-api-access-ps8hd\") pod \"2f03cdf3-ef7a-4b45-b92a-346469d17373\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.789486 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-inventory\") pod \"2f03cdf3-ef7a-4b45-b92a-346469d17373\" (UID: \"2f03cdf3-ef7a-4b45-b92a-346469d17373\") " Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.804022 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f03cdf3-ef7a-4b45-b92a-346469d17373-kube-api-access-ps8hd" (OuterVolumeSpecName: "kube-api-access-ps8hd") pod "2f03cdf3-ef7a-4b45-b92a-346469d17373" (UID: "2f03cdf3-ef7a-4b45-b92a-346469d17373"). InnerVolumeSpecName "kube-api-access-ps8hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.821701 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-inventory" (OuterVolumeSpecName: "inventory") pod "2f03cdf3-ef7a-4b45-b92a-346469d17373" (UID: "2f03cdf3-ef7a-4b45-b92a-346469d17373"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.822245 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "2f03cdf3-ef7a-4b45-b92a-346469d17373" (UID: "2f03cdf3-ef7a-4b45-b92a-346469d17373"). InnerVolumeSpecName "logging-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.823304 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "2f03cdf3-ef7a-4b45-b92a-346469d17373" (UID: "2f03cdf3-ef7a-4b45-b92a-346469d17373"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.830776 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f03cdf3-ef7a-4b45-b92a-346469d17373" (UID: "2f03cdf3-ef7a-4b45-b92a-346469d17373"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.892587 4966 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.892616 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.892626 4966 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.892635 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps8hd\" (UniqueName: \"kubernetes.io/projected/2f03cdf3-ef7a-4b45-b92a-346469d17373-kube-api-access-ps8hd\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:53 crc kubenswrapper[4966]: I0127 16:34:53.892644 4966 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f03cdf3-ef7a-4b45-b92a-346469d17373-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:54 crc kubenswrapper[4966]: I0127 16:34:54.248954 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" event={"ID":"2f03cdf3-ef7a-4b45-b92a-346469d17373","Type":"ContainerDied","Data":"8b4dd56754e160689f9c0337f0613e96d8b1489b25c324b39398b3844a8ae039"} Jan 27 16:34:54 crc kubenswrapper[4966]: I0127 16:34:54.249199 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b4dd56754e160689f9c0337f0613e96d8b1489b25c324b39398b3844a8ae039" Jan 27 16:34:54 crc kubenswrapper[4966]: I0127 16:34:54.249020 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-vkkgc" Jan 27 16:34:57 crc kubenswrapper[4966]: I0127 16:34:57.521091 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:34:57 crc kubenswrapper[4966]: E0127 16:34:57.522079 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:35:12 crc kubenswrapper[4966]: I0127 16:35:12.521658 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:35:12 crc kubenswrapper[4966]: E0127 16:35:12.522547 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:35:23 crc kubenswrapper[4966]: I0127 16:35:23.522410 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:35:23 crc kubenswrapper[4966]: E0127 16:35:23.523255 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:35:38 crc kubenswrapper[4966]: I0127 16:35:38.521060 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:35:38 crc kubenswrapper[4966]: E0127 16:35:38.521992 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:35:53 crc kubenswrapper[4966]: I0127 16:35:53.521718 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:35:53 crc kubenswrapper[4966]: I0127 16:35:53.860970 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"7bdbc3e9b8a02c03daf2fd9d623d3d799b555cda597143e2df71cca8d0b7c7ae"} Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.309875 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-42trm"] Jan 27 16:36:46 crc kubenswrapper[4966]: E0127 16:36:46.316157 4966 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2f03cdf3-ef7a-4b45-b92a-346469d17373" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.316177 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f03cdf3-ef7a-4b45-b92a-346469d17373" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.316475 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f03cdf3-ef7a-4b45-b92a-346469d17373" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.318376 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.326803 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-42trm"] Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.401439 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7w2g\" (UniqueName: \"kubernetes.io/projected/4728aad5-bbf6-41c6-a703-b4034b440d75-kube-api-access-t7w2g\") pod \"redhat-marketplace-42trm\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.401614 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-utilities\") pod \"redhat-marketplace-42trm\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.402084 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-catalog-content\") pod \"redhat-marketplace-42trm\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.504960 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-catalog-content\") pod \"redhat-marketplace-42trm\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.505048 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7w2g\" (UniqueName: \"kubernetes.io/projected/4728aad5-bbf6-41c6-a703-b4034b440d75-kube-api-access-t7w2g\") pod \"redhat-marketplace-42trm\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.505104 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-utilities\") pod \"redhat-marketplace-42trm\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.505482 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-catalog-content\") pod \"redhat-marketplace-42trm\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.505647 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-utilities\") pod \"redhat-marketplace-42trm\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.525393 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7w2g\" (UniqueName: \"kubernetes.io/projected/4728aad5-bbf6-41c6-a703-b4034b440d75-kube-api-access-t7w2g\") pod \"redhat-marketplace-42trm\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:46 crc kubenswrapper[4966]: I0127 16:36:46.651687 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:47 crc kubenswrapper[4966]: I0127 16:36:47.153997 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-42trm"] Jan 27 16:36:47 crc kubenswrapper[4966]: I0127 16:36:47.445515 4966 generic.go:334] "Generic (PLEG): container finished" podID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerID="e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8" exitCode=0 Jan 27 16:36:47 crc kubenswrapper[4966]: I0127 16:36:47.445846 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42trm" event={"ID":"4728aad5-bbf6-41c6-a703-b4034b440d75","Type":"ContainerDied","Data":"e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8"} Jan 27 16:36:47 crc kubenswrapper[4966]: I0127 16:36:47.445888 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42trm" event={"ID":"4728aad5-bbf6-41c6-a703-b4034b440d75","Type":"ContainerStarted","Data":"e2bf16e6c4b2ecd5bc8bfb062362c113b855a93cf435695b9ebe123e8a90d892"} Jan 27 16:36:48 crc kubenswrapper[4966]: I0127 16:36:48.456422 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42trm" event={"ID":"4728aad5-bbf6-41c6-a703-b4034b440d75","Type":"ContainerStarted","Data":"7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7"} Jan 27 16:36:49 crc kubenswrapper[4966]: I0127 16:36:49.498819 4966 generic.go:334] "Generic (PLEG): container finished" podID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerID="7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7" exitCode=0 Jan 27 16:36:49 crc kubenswrapper[4966]: I0127 16:36:49.498989 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42trm" event={"ID":"4728aad5-bbf6-41c6-a703-b4034b440d75","Type":"ContainerDied","Data":"7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7"} Jan 27 16:36:50 crc kubenswrapper[4966]: I0127 16:36:50.511665 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42trm" event={"ID":"4728aad5-bbf6-41c6-a703-b4034b440d75","Type":"ContainerStarted","Data":"4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce"} Jan 27 16:36:50 crc kubenswrapper[4966]: I0127 16:36:50.540691 
4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-42trm" podStartSLOduration=2.023934339 podStartE2EDuration="4.540675657s" podCreationTimestamp="2026-01-27 16:36:46 +0000 UTC" firstStartedPulling="2026-01-27 16:36:47.45180515 +0000 UTC m=+3273.754598628" lastFinishedPulling="2026-01-27 16:36:49.968546458 +0000 UTC m=+3276.271339946" observedRunningTime="2026-01-27 16:36:50.534398681 +0000 UTC m=+3276.837192189" watchObservedRunningTime="2026-01-27 16:36:50.540675657 +0000 UTC m=+3276.843469145" Jan 27 16:36:56 crc kubenswrapper[4966]: I0127 16:36:56.652131 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:56 crc kubenswrapper[4966]: I0127 16:36:56.652708 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:56 crc kubenswrapper[4966]: I0127 16:36:56.723404 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:57 crc kubenswrapper[4966]: I0127 16:36:57.634004 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:36:57 crc kubenswrapper[4966]: I0127 16:36:57.691327 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-42trm"] Jan 27 16:36:59 crc kubenswrapper[4966]: I0127 16:36:59.607280 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-42trm" podUID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerName="registry-server" containerID="cri-o://4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce" gracePeriod=2 Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.204848 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.250150 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7w2g\" (UniqueName: \"kubernetes.io/projected/4728aad5-bbf6-41c6-a703-b4034b440d75-kube-api-access-t7w2g\") pod \"4728aad5-bbf6-41c6-a703-b4034b440d75\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.250620 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-utilities\") pod \"4728aad5-bbf6-41c6-a703-b4034b440d75\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.250731 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-catalog-content\") pod \"4728aad5-bbf6-41c6-a703-b4034b440d75\" (UID: \"4728aad5-bbf6-41c6-a703-b4034b440d75\") " Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.251738 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-utilities" (OuterVolumeSpecName: "utilities") pod "4728aad5-bbf6-41c6-a703-b4034b440d75" (UID: "4728aad5-bbf6-41c6-a703-b4034b440d75"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.256929 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4728aad5-bbf6-41c6-a703-b4034b440d75-kube-api-access-t7w2g" (OuterVolumeSpecName: "kube-api-access-t7w2g") pod "4728aad5-bbf6-41c6-a703-b4034b440d75" (UID: "4728aad5-bbf6-41c6-a703-b4034b440d75"). InnerVolumeSpecName "kube-api-access-t7w2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.275109 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4728aad5-bbf6-41c6-a703-b4034b440d75" (UID: "4728aad5-bbf6-41c6-a703-b4034b440d75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.353807 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.353846 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7w2g\" (UniqueName: \"kubernetes.io/projected/4728aad5-bbf6-41c6-a703-b4034b440d75-kube-api-access-t7w2g\") on node \"crc\" DevicePath \"\"" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.353858 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4728aad5-bbf6-41c6-a703-b4034b440d75-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.619134 4966 generic.go:334] "Generic (PLEG): container finished" podID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerID="4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce" exitCode=0 Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.619179 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42trm" event={"ID":"4728aad5-bbf6-41c6-a703-b4034b440d75","Type":"ContainerDied","Data":"4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce"} Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.619211 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42trm" event={"ID":"4728aad5-bbf6-41c6-a703-b4034b440d75","Type":"ContainerDied","Data":"e2bf16e6c4b2ecd5bc8bfb062362c113b855a93cf435695b9ebe123e8a90d892"} Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.619221 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42trm" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.619232 4966 scope.go:117] "RemoveContainer" containerID="4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.640636 4966 scope.go:117] "RemoveContainer" containerID="7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.643828 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-42trm"] Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.654939 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-42trm"] Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.676459 4966 scope.go:117] "RemoveContainer" containerID="e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.728006 4966 scope.go:117] "RemoveContainer" containerID="4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce" Jan 27 16:37:00 crc kubenswrapper[4966]: E0127 16:37:00.728391 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce\": container with ID starting with 4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce not found: ID does not exist" containerID="4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.728426 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce"} err="failed to get container status \"4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce\": rpc error: code = NotFound desc = could not find container \"4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce\": container with ID starting with 4b2503d3e64070db4633db62813d9bb32280b4da876fdf6ba02913d295ee1cce not found: ID does not exist" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.728450 4966 scope.go:117] "RemoveContainer" containerID="7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7" Jan 27 16:37:00 crc kubenswrapper[4966]: E0127 16:37:00.728736 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7\": container with ID starting with 7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7 not found: ID does not exist" containerID="7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.728768 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7"} err="failed to get container status \"7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7\": rpc error: code = NotFound desc = could not find container \"7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7\": container with ID starting with 7654d17938974780201d72b3e351529335f9bf04e4297d735fbb29bfc64fd4f7 not found: ID does not exist" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.728790 4966 scope.go:117] "RemoveContainer" 
containerID="e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8" Jan 27 16:37:00 crc kubenswrapper[4966]: E0127 16:37:00.729088 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8\": container with ID starting with e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8 not found: ID does not exist" containerID="e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8" Jan 27 16:37:00 crc kubenswrapper[4966]: I0127 16:37:00.729114 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8"} err="failed to get container status \"e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8\": rpc error: code = NotFound desc = could not find container \"e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8\": container with ID starting with e0422e65a2e99bd3eef4d2df0ff92a2e0d1206d747a9c542f0f95caf5ebbc7c8 not found: ID does not exist" Jan 27 16:37:02 crc kubenswrapper[4966]: I0127 16:37:02.534026 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4728aad5-bbf6-41c6-a703-b4034b440d75" path="/var/lib/kubelet/pods/4728aad5-bbf6-41c6-a703-b4034b440d75/volumes" Jan 27 16:38:10 crc kubenswrapper[4966]: I0127 16:38:10.120514 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:38:10 crc kubenswrapper[4966]: I0127 16:38:10.121215 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:38:40 crc kubenswrapper[4966]: I0127 16:38:40.120377 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:38:40 crc kubenswrapper[4966]: I0127 16:38:40.120936 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:39:10 crc kubenswrapper[4966]: I0127 16:39:10.119807 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:39:10 crc kubenswrapper[4966]: I0127 16:39:10.121573 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:39:10 crc kubenswrapper[4966]: I0127 16:39:10.121709 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 16:39:10 crc kubenswrapper[4966]: I0127 16:39:10.122686 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bdbc3e9b8a02c03daf2fd9d623d3d799b555cda597143e2df71cca8d0b7c7ae"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:39:10 crc kubenswrapper[4966]: I0127 16:39:10.122828 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://7bdbc3e9b8a02c03daf2fd9d623d3d799b555cda597143e2df71cca8d0b7c7ae" gracePeriod=600 Jan 27 16:39:11 crc kubenswrapper[4966]: I0127 16:39:11.176785 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="7bdbc3e9b8a02c03daf2fd9d623d3d799b555cda597143e2df71cca8d0b7c7ae" exitCode=0 Jan 27 16:39:11 crc kubenswrapper[4966]: I0127 16:39:11.176890 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"7bdbc3e9b8a02c03daf2fd9d623d3d799b555cda597143e2df71cca8d0b7c7ae"} Jan 27 16:39:11 crc kubenswrapper[4966]: I0127 16:39:11.178742 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893"} Jan 27 16:39:11 crc kubenswrapper[4966]: I0127 16:39:11.178834 4966 scope.go:117] "RemoveContainer" containerID="995dc1f850a0013c9f2b480ada932425f2e64686464060a847d4baeac59e823d" Jan 27 16:39:47 crc kubenswrapper[4966]: E0127 16:39:47.787184 4966 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.58:55792->38.129.56.58:34425: write tcp 38.129.56.58:55792->38.129.56.58:34425: write: broken pipe Jan 27 16:41:10 crc kubenswrapper[4966]: I0127 16:41:10.119984 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:41:10 crc kubenswrapper[4966]: I0127 16:41:10.121184 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.422295 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pv28b"] Jan 27 16:41:19 crc kubenswrapper[4966]: E0127 16:41:19.423556 4966 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerName="registry-server" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.423576 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerName="registry-server" Jan 27 16:41:19 crc kubenswrapper[4966]: E0127 16:41:19.423588 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerName="extract-utilities" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.423596 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerName="extract-utilities" Jan 27 16:41:19 crc kubenswrapper[4966]: E0127 16:41:19.423614 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerName="extract-content" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.423621 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerName="extract-content" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.423939 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="4728aad5-bbf6-41c6-a703-b4034b440d75" containerName="registry-server" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.426185 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.447401 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pv28b"] Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.589037 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-catalog-content\") pod \"community-operators-pv28b\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.589977 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xlml\" (UniqueName: \"kubernetes.io/projected/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-kube-api-access-7xlml\") pod \"community-operators-pv28b\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.590210 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-utilities\") pod \"community-operators-pv28b\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.693702 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xlml\" (UniqueName: \"kubernetes.io/projected/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-kube-api-access-7xlml\") pod \"community-operators-pv28b\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.694003 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-utilities\") pod 
\"community-operators-pv28b\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.694431 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-catalog-content\") pod \"community-operators-pv28b\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.694918 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-catalog-content\") pod \"community-operators-pv28b\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.695347 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-utilities\") pod \"community-operators-pv28b\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.715527 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xlml\" (UniqueName: \"kubernetes.io/projected/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-kube-api-access-7xlml\") pod \"community-operators-pv28b\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:19 crc kubenswrapper[4966]: I0127 16:41:19.760458 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:20 crc kubenswrapper[4966]: I0127 16:41:20.375855 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pv28b"] Jan 27 16:41:20 crc kubenswrapper[4966]: I0127 16:41:20.782993 4966 generic.go:334] "Generic (PLEG): container finished" podID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerID="5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4" exitCode=0 Jan 27 16:41:20 crc kubenswrapper[4966]: I0127 16:41:20.783098 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv28b" event={"ID":"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5","Type":"ContainerDied","Data":"5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4"} Jan 27 16:41:20 crc kubenswrapper[4966]: I0127 16:41:20.783294 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv28b" event={"ID":"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5","Type":"ContainerStarted","Data":"6a03886fd163fd72808980fcd1fbe075168a8938112c3b2529478a4745129e21"} Jan 27 16:41:20 crc kubenswrapper[4966]: I0127 16:41:20.786557 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:41:22 crc kubenswrapper[4966]: I0127 16:41:22.816774 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv28b" event={"ID":"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5","Type":"ContainerStarted","Data":"2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6"} Jan 27 16:41:23 crc kubenswrapper[4966]: I0127 16:41:23.830694 4966 generic.go:334] "Generic (PLEG): container finished" podID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerID="2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6" exitCode=0 Jan 27 16:41:23 crc kubenswrapper[4966]: I0127 16:41:23.831066 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv28b" event={"ID":"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5","Type":"ContainerDied","Data":"2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6"} Jan 27 16:41:24 crc kubenswrapper[4966]: I0127 16:41:24.858599 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv28b" event={"ID":"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5","Type":"ContainerStarted","Data":"9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd"} Jan 27 16:41:24 crc kubenswrapper[4966]: I0127 16:41:24.889801 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pv28b" podStartSLOduration=2.440036249 podStartE2EDuration="5.889785763s" podCreationTimestamp="2026-01-27 16:41:19 +0000 UTC" firstStartedPulling="2026-01-27 16:41:20.786286051 +0000 UTC m=+3547.089079549" lastFinishedPulling="2026-01-27 16:41:24.236035565 +0000 UTC m=+3550.538829063" observedRunningTime="2026-01-27 16:41:24.886817901 +0000 UTC m=+3551.189611399" watchObservedRunningTime="2026-01-27 16:41:24.889785763 +0000 UTC m=+3551.192579251" Jan 27 16:41:29 crc kubenswrapper[4966]: I0127 16:41:29.761000 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:29 crc kubenswrapper[4966]: I0127 16:41:29.761577 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:29 crc kubenswrapper[4966]: I0127 16:41:29.848255 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:29 crc kubenswrapper[4966]: I0127 16:41:29.979117 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:30 crc kubenswrapper[4966]: I0127 16:41:30.096235 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pv28b"] Jan 27 16:41:31 crc kubenswrapper[4966]: I0127 16:41:31.936734 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pv28b" podUID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerName="registry-server" containerID="cri-o://9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd" gracePeriod=2 Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.479253 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.532400 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-catalog-content\") pod \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.532656 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xlml\" (UniqueName: \"kubernetes.io/projected/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-kube-api-access-7xlml\") pod \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.534599 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-utilities\") pod \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\" (UID: \"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5\") " Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.536395 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-utilities" (OuterVolumeSpecName: "utilities") pod "a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" (UID: "a2968be0-eeca-45e1-bb93-ccb7c1da2ed5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.537477 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.543308 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-kube-api-access-7xlml" (OuterVolumeSpecName: "kube-api-access-7xlml") pod "a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" (UID: "a2968be0-eeca-45e1-bb93-ccb7c1da2ed5"). InnerVolumeSpecName "kube-api-access-7xlml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.608958 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" (UID: "a2968be0-eeca-45e1-bb93-ccb7c1da2ed5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.641162 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xlml\" (UniqueName: \"kubernetes.io/projected/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-kube-api-access-7xlml\") on node \"crc\" DevicePath \"\"" Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.641292 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.957163 4966 generic.go:334] "Generic (PLEG): container finished" podID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerID="9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd" exitCode=0 Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.957226 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv28b" event={"ID":"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5","Type":"ContainerDied","Data":"9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd"} Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.957242 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv28b" Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.957279 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv28b" event={"ID":"a2968be0-eeca-45e1-bb93-ccb7c1da2ed5","Type":"ContainerDied","Data":"6a03886fd163fd72808980fcd1fbe075168a8938112c3b2529478a4745129e21"} Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.957311 4966 scope.go:117] "RemoveContainer" containerID="9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd" Jan 27 16:41:32 crc kubenswrapper[4966]: I0127 16:41:32.980689 4966 scope.go:117] "RemoveContainer" containerID="2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6" Jan 27 16:41:33 crc kubenswrapper[4966]: I0127 16:41:33.023537 4966 scope.go:117] "RemoveContainer" containerID="5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4" Jan 27 16:41:33 crc kubenswrapper[4966]: I0127 16:41:33.026697 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pv28b"] Jan 27 16:41:33 crc kubenswrapper[4966]: I0127 16:41:33.041538 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pv28b"] Jan 27 16:41:33 crc kubenswrapper[4966]: I0127 16:41:33.098203 4966 scope.go:117] "RemoveContainer" containerID="9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd" Jan 27 16:41:33 crc kubenswrapper[4966]: E0127 16:41:33.098840 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd\": container with ID starting with 
9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd not found: ID does not exist" containerID="9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd" Jan 27 16:41:33 crc kubenswrapper[4966]: I0127 16:41:33.098926 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd"} err="failed to get container status \"9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd\": rpc error: code = NotFound desc = could not find container \"9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd\": container with ID starting with 9649b68614ea984c0f8afec2cd9a075ae597fb114bce0968f6e9e8f1c05714bd not found: ID does not exist" Jan 27 16:41:33 crc kubenswrapper[4966]: I0127 16:41:33.098996 4966 scope.go:117] "RemoveContainer" containerID="2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6" Jan 27 16:41:33 crc kubenswrapper[4966]: E0127 16:41:33.099423 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6\": container with ID starting with 2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6 not found: ID does not exist" containerID="2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6" Jan 27 16:41:33 crc kubenswrapper[4966]: I0127 16:41:33.099459 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6"} err="failed to get container status \"2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6\": rpc error: code = NotFound desc = could not find container \"2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6\": container with ID starting with 2593afd647fcc14c57d271a656e3a78a0d0f22c9d51036987e4d164996360ee6 not found: ID does not exist" Jan 27 16:41:33 crc kubenswrapper[4966]: I0127 16:41:33.099479 4966 scope.go:117] "RemoveContainer" containerID="5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4" Jan 27 16:41:33 crc kubenswrapper[4966]: E0127 16:41:33.099878 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4\": container with ID starting with 5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4 not found: ID does not exist" containerID="5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4" Jan 27 16:41:33 crc kubenswrapper[4966]: I0127 16:41:33.099953 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4"} err="failed to get container status \"5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4\": rpc error: code = NotFound desc = could not find container \"5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4\": container with ID starting with 5e54e06e24a052891a72d25fa149f61c65b8ac526efc7aab3dc01736116f1bb4 not found: ID does not exist" Jan 27 16:41:34 crc kubenswrapper[4966]: I0127 16:41:34.540475 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" path="/var/lib/kubelet/pods/a2968be0-eeca-45e1-bb93-ccb7c1da2ed5/volumes" Jan 27 16:41:40 crc kubenswrapper[4966]: I0127 16:41:40.119957 
4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:41:40 crc kubenswrapper[4966]: I0127 16:41:40.120516 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:42:10 crc kubenswrapper[4966]: I0127 16:42:10.120135 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:42:10 crc kubenswrapper[4966]: I0127 16:42:10.120772 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:42:10 crc kubenswrapper[4966]: I0127 16:42:10.120839 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 16:42:10 crc kubenswrapper[4966]: I0127 16:42:10.122517 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:42:10 crc kubenswrapper[4966]: I0127 16:42:10.122626 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" gracePeriod=600 Jan 27 16:42:10 crc kubenswrapper[4966]: E0127 16:42:10.243622 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:42:10 crc kubenswrapper[4966]: I0127 16:42:10.425096 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" exitCode=0 Jan 27 16:42:10 crc kubenswrapper[4966]: I0127 16:42:10.425154 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893"} Jan 
27 16:42:10 crc kubenswrapper[4966]: I0127 16:42:10.425189 4966 scope.go:117] "RemoveContainer" containerID="7bdbc3e9b8a02c03daf2fd9d623d3d799b555cda597143e2df71cca8d0b7c7ae" Jan 27 16:42:10 crc kubenswrapper[4966]: I0127 16:42:10.426257 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:42:10 crc kubenswrapper[4966]: E0127 16:42:10.426802 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:42:25 crc kubenswrapper[4966]: I0127 16:42:25.522154 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:42:25 crc kubenswrapper[4966]: E0127 16:42:25.523168 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.056462 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wfk6m"] Jan 27 16:42:26 crc kubenswrapper[4966]: E0127 16:42:26.057097 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerName="extract-content" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.057119 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerName="extract-content" Jan 27 16:42:26 crc kubenswrapper[4966]: E0127 16:42:26.057147 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerName="registry-server" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.057155 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerName="registry-server" Jan 27 16:42:26 crc kubenswrapper[4966]: E0127 16:42:26.057183 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerName="extract-utilities" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.057193 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerName="extract-utilities" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.057465 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2968be0-eeca-45e1-bb93-ccb7c1da2ed5" containerName="registry-server" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.059524 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.088185 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfk6m"] Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.127156 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-catalog-content\") pod \"certified-operators-wfk6m\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.127224 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2sn\" (UniqueName: \"kubernetes.io/projected/1f4b74bd-9fe4-488d-a651-1db61770e717-kube-api-access-bv2sn\") pod \"certified-operators-wfk6m\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.127267 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-utilities\") pod \"certified-operators-wfk6m\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.229700 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv2sn\" (UniqueName: \"kubernetes.io/projected/1f4b74bd-9fe4-488d-a651-1db61770e717-kube-api-access-bv2sn\") pod \"certified-operators-wfk6m\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.229761 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-utilities\") pod \"certified-operators-wfk6m\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.230064 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-catalog-content\") pod \"certified-operators-wfk6m\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.230560 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-catalog-content\") pod \"certified-operators-wfk6m\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.230558 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-utilities\") pod \"certified-operators-wfk6m\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.250417 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bv2sn\" (UniqueName: \"kubernetes.io/projected/1f4b74bd-9fe4-488d-a651-1db61770e717-kube-api-access-bv2sn\") pod \"certified-operators-wfk6m\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.385559 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:26 crc kubenswrapper[4966]: I0127 16:42:26.893581 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfk6m"] Jan 27 16:42:27 crc kubenswrapper[4966]: I0127 16:42:27.638566 4966 generic.go:334] "Generic (PLEG): container finished" podID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerID="41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b" exitCode=0 Jan 27 16:42:27 crc kubenswrapper[4966]: I0127 16:42:27.638612 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfk6m" event={"ID":"1f4b74bd-9fe4-488d-a651-1db61770e717","Type":"ContainerDied","Data":"41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b"} Jan 27 16:42:27 crc kubenswrapper[4966]: I0127 16:42:27.638641 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfk6m" event={"ID":"1f4b74bd-9fe4-488d-a651-1db61770e717","Type":"ContainerStarted","Data":"8c5b4c12eff7200f716ce64eeb2d8bb523d61f33d49c9b6de52940005df68541"} Jan 27 16:42:29 crc kubenswrapper[4966]: I0127 16:42:29.669703 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfk6m" event={"ID":"1f4b74bd-9fe4-488d-a651-1db61770e717","Type":"ContainerStarted","Data":"adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e"} Jan 27 16:42:30 crc kubenswrapper[4966]: I0127 16:42:30.690205 4966 generic.go:334] "Generic (PLEG): container finished" podID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerID="adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e" exitCode=0 Jan 27 16:42:30 crc kubenswrapper[4966]: I0127 16:42:30.690326 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfk6m" event={"ID":"1f4b74bd-9fe4-488d-a651-1db61770e717","Type":"ContainerDied","Data":"adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e"} Jan 27 16:42:31 crc kubenswrapper[4966]: I0127 16:42:31.705502 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfk6m" event={"ID":"1f4b74bd-9fe4-488d-a651-1db61770e717","Type":"ContainerStarted","Data":"1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55"} Jan 27 16:42:31 crc kubenswrapper[4966]: I0127 16:42:31.745218 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wfk6m" podStartSLOduration=2.248467492 podStartE2EDuration="5.745194639s" podCreationTimestamp="2026-01-27 16:42:26 +0000 UTC" firstStartedPulling="2026-01-27 16:42:27.640728906 +0000 UTC m=+3613.943522394" lastFinishedPulling="2026-01-27 16:42:31.137456013 +0000 UTC m=+3617.440249541" observedRunningTime="2026-01-27 16:42:31.727720021 +0000 UTC m=+3618.030513509" watchObservedRunningTime="2026-01-27 16:42:31.745194639 +0000 UTC m=+3618.047988127" Jan 27 16:42:36 crc kubenswrapper[4966]: I0127 16:42:36.386767 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:36 crc kubenswrapper[4966]: I0127 16:42:36.387198 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:36 crc kubenswrapper[4966]: I0127 16:42:36.461344 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:36 crc kubenswrapper[4966]: I0127 16:42:36.847677 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:36 crc kubenswrapper[4966]: I0127 16:42:36.918615 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wfk6m"] Jan 27 16:42:38 crc kubenswrapper[4966]: I0127 16:42:38.805480 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wfk6m" podUID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerName="registry-server" containerID="cri-o://1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55" gracePeriod=2 Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.339989 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.375840 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-catalog-content\") pod \"1f4b74bd-9fe4-488d-a651-1db61770e717\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.376339 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv2sn\" (UniqueName: \"kubernetes.io/projected/1f4b74bd-9fe4-488d-a651-1db61770e717-kube-api-access-bv2sn\") pod \"1f4b74bd-9fe4-488d-a651-1db61770e717\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.376538 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-utilities\") pod \"1f4b74bd-9fe4-488d-a651-1db61770e717\" (UID: \"1f4b74bd-9fe4-488d-a651-1db61770e717\") " Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.377320 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-utilities" (OuterVolumeSpecName: "utilities") pod "1f4b74bd-9fe4-488d-a651-1db61770e717" (UID: "1f4b74bd-9fe4-488d-a651-1db61770e717"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.378134 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.387654 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4b74bd-9fe4-488d-a651-1db61770e717-kube-api-access-bv2sn" (OuterVolumeSpecName: "kube-api-access-bv2sn") pod "1f4b74bd-9fe4-488d-a651-1db61770e717" (UID: "1f4b74bd-9fe4-488d-a651-1db61770e717"). InnerVolumeSpecName "kube-api-access-bv2sn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.437780 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f4b74bd-9fe4-488d-a651-1db61770e717" (UID: "1f4b74bd-9fe4-488d-a651-1db61770e717"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.480851 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv2sn\" (UniqueName: \"kubernetes.io/projected/1f4b74bd-9fe4-488d-a651-1db61770e717-kube-api-access-bv2sn\") on node \"crc\" DevicePath \"\"" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.480895 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4b74bd-9fe4-488d-a651-1db61770e717-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.819669 4966 generic.go:334] "Generic (PLEG): container finished" podID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerID="1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55" exitCode=0 Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.819773 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfk6m" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.819749 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfk6m" event={"ID":"1f4b74bd-9fe4-488d-a651-1db61770e717","Type":"ContainerDied","Data":"1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55"} Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.820189 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfk6m" event={"ID":"1f4b74bd-9fe4-488d-a651-1db61770e717","Type":"ContainerDied","Data":"8c5b4c12eff7200f716ce64eeb2d8bb523d61f33d49c9b6de52940005df68541"} Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.820238 4966 scope.go:117] "RemoveContainer" containerID="1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.858482 4966 scope.go:117] "RemoveContainer" containerID="adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.858620 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wfk6m"] Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.880633 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wfk6m"] Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.892933 4966 scope.go:117] "RemoveContainer" containerID="41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.944588 4966 scope.go:117] "RemoveContainer" containerID="1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55" Jan 27 16:42:39 crc kubenswrapper[4966]: E0127 16:42:39.945137 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55\": container with ID starting with 
1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55 not found: ID does not exist" containerID="1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.945186 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55"} err="failed to get container status \"1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55\": rpc error: code = NotFound desc = could not find container \"1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55\": container with ID starting with 1c445f443ee903dc23428ce8f86c4f7d6dee1d3758617c7ec0f7701c012b0b55 not found: ID does not exist" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.945217 4966 scope.go:117] "RemoveContainer" containerID="adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e" Jan 27 16:42:39 crc kubenswrapper[4966]: E0127 16:42:39.945723 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e\": container with ID starting with adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e not found: ID does not exist" containerID="adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.945771 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e"} err="failed to get container status \"adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e\": rpc error: code = NotFound desc = could not find container \"adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e\": container with ID starting with adba0a17dbd1f6afe81dc8acda9f4cddba83fffeee1ec42c56e502664c2e828e not found: ID does not exist" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.945805 4966 scope.go:117] "RemoveContainer" containerID="41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b" Jan 27 16:42:39 crc kubenswrapper[4966]: E0127 16:42:39.946186 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b\": container with ID starting with 41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b not found: ID does not exist" containerID="41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b" Jan 27 16:42:39 crc kubenswrapper[4966]: I0127 16:42:39.946308 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b"} err="failed to get container status \"41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b\": rpc error: code = NotFound desc = could not find container \"41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b\": container with ID starting with 41cc9d87fdb53f5ce412c1da6c7c7835f3fa58ca20aa7044dc38d7939e7d3c7b not found: ID does not exist" Jan 27 16:42:40 crc kubenswrapper[4966]: I0127 16:42:40.521469 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:42:40 crc kubenswrapper[4966]: E0127 16:42:40.521883 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:42:40 crc kubenswrapper[4966]: I0127 16:42:40.534418 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4b74bd-9fe4-488d-a651-1db61770e717" path="/var/lib/kubelet/pods/1f4b74bd-9fe4-488d-a651-1db61770e717/volumes" Jan 27 16:42:54 crc kubenswrapper[4966]: I0127 16:42:54.530535 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:42:54 crc kubenswrapper[4966]: E0127 16:42:54.531394 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:43:08 crc kubenswrapper[4966]: I0127 16:43:08.522167 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:43:08 crc kubenswrapper[4966]: E0127 16:43:08.523185 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:43:19 crc kubenswrapper[4966]: I0127 16:43:19.521408 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:43:19 crc kubenswrapper[4966]: E0127 16:43:19.522458 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:43:30 crc kubenswrapper[4966]: I0127 16:43:30.521620 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:43:30 crc kubenswrapper[4966]: E0127 16:43:30.523612 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:43:41 crc kubenswrapper[4966]: I0127 16:43:41.521844 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:43:41 crc kubenswrapper[4966]: E0127 16:43:41.522753 4966 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:43:53 crc kubenswrapper[4966]: I0127 16:43:53.521860 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:43:53 crc kubenswrapper[4966]: E0127 16:43:53.523868 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:44:08 crc kubenswrapper[4966]: I0127 16:44:08.521242 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:44:08 crc kubenswrapper[4966]: E0127 16:44:08.522057 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:44:14 crc kubenswrapper[4966]: I0127 16:44:14.023309 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7f67cf7b6c-fm8vs" podUID="bb0a5c7d-bb55-4f56-9f03-268df91b2748" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 16:44:21 crc kubenswrapper[4966]: I0127 16:44:21.521349 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:44:21 crc kubenswrapper[4966]: E0127 16:44:21.522406 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:44:35 crc kubenswrapper[4966]: I0127 16:44:35.520644 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:44:35 crc kubenswrapper[4966]: E0127 16:44:35.521513 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:44:47 crc kubenswrapper[4966]: I0127 16:44:47.520976 4966 scope.go:117] "RemoveContainer" 
containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:44:47 crc kubenswrapper[4966]: E0127 16:44:47.521879 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:44:58 crc kubenswrapper[4966]: I0127 16:44:58.521987 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:44:58 crc kubenswrapper[4966]: E0127 16:44:58.523437 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.178067 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw"] Jan 27 16:45:00 crc kubenswrapper[4966]: E0127 16:45:00.178971 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerName="extract-utilities" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.178983 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerName="extract-utilities" Jan 27 16:45:00 crc kubenswrapper[4966]: E0127 16:45:00.179002 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerName="extract-content" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.179008 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerName="extract-content" Jan 27 16:45:00 crc kubenswrapper[4966]: E0127 16:45:00.179041 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerName="registry-server" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.179048 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerName="registry-server" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.179283 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4b74bd-9fe4-488d-a651-1db61770e717" containerName="registry-server" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.182207 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.185375 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.185426 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.191440 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw"] Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.229259 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4w75\" (UniqueName: \"kubernetes.io/projected/24367f68-d377-45ea-b353-39024840194f-kube-api-access-l4w75\") pod \"collect-profiles-29492205-6c4dw\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.229375 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24367f68-d377-45ea-b353-39024840194f-secret-volume\") pod \"collect-profiles-29492205-6c4dw\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.229427 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24367f68-d377-45ea-b353-39024840194f-config-volume\") pod \"collect-profiles-29492205-6c4dw\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.332442 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4w75\" (UniqueName: \"kubernetes.io/projected/24367f68-d377-45ea-b353-39024840194f-kube-api-access-l4w75\") pod \"collect-profiles-29492205-6c4dw\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.332529 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24367f68-d377-45ea-b353-39024840194f-secret-volume\") pod \"collect-profiles-29492205-6c4dw\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.332560 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24367f68-d377-45ea-b353-39024840194f-config-volume\") pod \"collect-profiles-29492205-6c4dw\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.333506 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24367f68-d377-45ea-b353-39024840194f-config-volume\") pod 
\"collect-profiles-29492205-6c4dw\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.337985 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24367f68-d377-45ea-b353-39024840194f-secret-volume\") pod \"collect-profiles-29492205-6c4dw\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.354742 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4w75\" (UniqueName: \"kubernetes.io/projected/24367f68-d377-45ea-b353-39024840194f-kube-api-access-l4w75\") pod \"collect-profiles-29492205-6c4dw\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:00 crc kubenswrapper[4966]: I0127 16:45:00.506949 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:01 crc kubenswrapper[4966]: I0127 16:45:01.015872 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw"] Jan 27 16:45:01 crc kubenswrapper[4966]: I0127 16:45:01.605239 4966 generic.go:334] "Generic (PLEG): container finished" podID="24367f68-d377-45ea-b353-39024840194f" containerID="58893e4efe5d0c6b1914d9ee010e52d6bf3b2059903eb2233b39eccb4234f797" exitCode=0 Jan 27 16:45:01 crc kubenswrapper[4966]: I0127 16:45:01.605744 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" event={"ID":"24367f68-d377-45ea-b353-39024840194f","Type":"ContainerDied","Data":"58893e4efe5d0c6b1914d9ee010e52d6bf3b2059903eb2233b39eccb4234f797"} Jan 27 16:45:01 crc kubenswrapper[4966]: I0127 16:45:01.605780 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" event={"ID":"24367f68-d377-45ea-b353-39024840194f","Type":"ContainerStarted","Data":"516a4050add4effda41bf6df973e6a8ca02987c00ddb6e42b3d84acf5c124a0b"} Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.047177 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.207954 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24367f68-d377-45ea-b353-39024840194f-secret-volume\") pod \"24367f68-d377-45ea-b353-39024840194f\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.208252 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24367f68-d377-45ea-b353-39024840194f-config-volume\") pod \"24367f68-d377-45ea-b353-39024840194f\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.208365 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4w75\" (UniqueName: \"kubernetes.io/projected/24367f68-d377-45ea-b353-39024840194f-kube-api-access-l4w75\") pod \"24367f68-d377-45ea-b353-39024840194f\" (UID: \"24367f68-d377-45ea-b353-39024840194f\") " Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.209049 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24367f68-d377-45ea-b353-39024840194f-config-volume" (OuterVolumeSpecName: "config-volume") pod "24367f68-d377-45ea-b353-39024840194f" (UID: "24367f68-d377-45ea-b353-39024840194f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.216076 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24367f68-d377-45ea-b353-39024840194f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24367f68-d377-45ea-b353-39024840194f" (UID: "24367f68-d377-45ea-b353-39024840194f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.216178 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24367f68-d377-45ea-b353-39024840194f-kube-api-access-l4w75" (OuterVolumeSpecName: "kube-api-access-l4w75") pod "24367f68-d377-45ea-b353-39024840194f" (UID: "24367f68-d377-45ea-b353-39024840194f"). InnerVolumeSpecName "kube-api-access-l4w75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.311008 4966 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24367f68-d377-45ea-b353-39024840194f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.311043 4966 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24367f68-d377-45ea-b353-39024840194f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.311055 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4w75\" (UniqueName: \"kubernetes.io/projected/24367f68-d377-45ea-b353-39024840194f-kube-api-access-l4w75\") on node \"crc\" DevicePath \"\"" Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.635481 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" event={"ID":"24367f68-d377-45ea-b353-39024840194f","Type":"ContainerDied","Data":"516a4050add4effda41bf6df973e6a8ca02987c00ddb6e42b3d84acf5c124a0b"} Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.635519 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="516a4050add4effda41bf6df973e6a8ca02987c00ddb6e42b3d84acf5c124a0b" Jan 27 16:45:03 crc kubenswrapper[4966]: I0127 16:45:03.635552 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-6c4dw" Jan 27 16:45:04 crc kubenswrapper[4966]: I0127 16:45:04.127856 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225"] Jan 27 16:45:04 crc kubenswrapper[4966]: I0127 16:45:04.141164 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-m5225"] Jan 27 16:45:04 crc kubenswrapper[4966]: I0127 16:45:04.536016 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54394f6-54e8-472f-b09b-198431196e09" path="/var/lib/kubelet/pods/c54394f6-54e8-472f-b09b-198431196e09/volumes" Jan 27 16:45:09 crc kubenswrapper[4966]: I0127 16:45:09.520687 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:45:09 crc kubenswrapper[4966]: E0127 16:45:09.521270 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:45:24 crc kubenswrapper[4966]: I0127 16:45:24.538559 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:45:24 crc kubenswrapper[4966]: E0127 16:45:24.539424 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:45:38 crc kubenswrapper[4966]: I0127 16:45:38.522064 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:45:38 crc kubenswrapper[4966]: E0127 16:45:38.523328 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:45:47 crc kubenswrapper[4966]: I0127 16:45:47.934775 4966 scope.go:117] "RemoveContainer" containerID="92ad7e2d28cd60d99f263d76ed16d0a5f54d8d8f72f6868c5ac79c2eba86a9e9" Jan 27 16:45:51 crc kubenswrapper[4966]: I0127 16:45:51.521233 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:45:51 crc kubenswrapper[4966]: E0127 16:45:51.522738 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.201094 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9c56q"] Jan 27 16:45:52 crc kubenswrapper[4966]: E0127 16:45:52.202289 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24367f68-d377-45ea-b353-39024840194f" containerName="collect-profiles" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.202312 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="24367f68-d377-45ea-b353-39024840194f" containerName="collect-profiles" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.202674 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="24367f68-d377-45ea-b353-39024840194f" containerName="collect-profiles" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.205748 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.216141 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9c56q"] Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.377273 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-utilities\") pod \"redhat-operators-9c56q\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.377586 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d44mx\" (UniqueName: \"kubernetes.io/projected/b53de596-ce07-41fe-8dd3-dfda1b69a621-kube-api-access-d44mx\") pod \"redhat-operators-9c56q\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.377731 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-catalog-content\") pod \"redhat-operators-9c56q\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.480201 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-catalog-content\") pod \"redhat-operators-9c56q\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.480595 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-utilities\") pod \"redhat-operators-9c56q\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.480703 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d44mx\" (UniqueName: \"kubernetes.io/projected/b53de596-ce07-41fe-8dd3-dfda1b69a621-kube-api-access-d44mx\") pod \"redhat-operators-9c56q\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.481002 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-utilities\") pod \"redhat-operators-9c56q\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.481117 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-catalog-content\") pod \"redhat-operators-9c56q\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.500549 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d44mx\" (UniqueName: \"kubernetes.io/projected/b53de596-ce07-41fe-8dd3-dfda1b69a621-kube-api-access-d44mx\") pod \"redhat-operators-9c56q\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:52 crc kubenswrapper[4966]: I0127 16:45:52.529444 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:45:53 crc kubenswrapper[4966]: I0127 16:45:52.999768 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9c56q"] Jan 27 16:45:53 crc kubenswrapper[4966]: I0127 16:45:53.245319 4966 generic.go:334] "Generic (PLEG): container finished" podID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerID="97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96" exitCode=0 Jan 27 16:45:53 crc kubenswrapper[4966]: I0127 16:45:53.245373 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c56q" event={"ID":"b53de596-ce07-41fe-8dd3-dfda1b69a621","Type":"ContainerDied","Data":"97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96"} Jan 27 16:45:53 crc kubenswrapper[4966]: I0127 16:45:53.245615 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c56q" event={"ID":"b53de596-ce07-41fe-8dd3-dfda1b69a621","Type":"ContainerStarted","Data":"31d1d8b4899a7c722eebc6a36a380dbafacfdec848e265b8801da2dd749702ed"} Jan 27 16:45:54 crc kubenswrapper[4966]: I0127 16:45:54.263366 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c56q" event={"ID":"b53de596-ce07-41fe-8dd3-dfda1b69a621","Type":"ContainerStarted","Data":"b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689"} Jan 27 16:45:59 crc kubenswrapper[4966]: I0127 16:45:59.337375 4966 generic.go:334] "Generic (PLEG): container finished" podID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerID="b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689" exitCode=0 Jan 27 16:45:59 crc kubenswrapper[4966]: I0127 16:45:59.337447 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c56q" event={"ID":"b53de596-ce07-41fe-8dd3-dfda1b69a621","Type":"ContainerDied","Data":"b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689"} Jan 27 16:46:00 crc kubenswrapper[4966]: I0127 16:46:00.357606 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c56q" event={"ID":"b53de596-ce07-41fe-8dd3-dfda1b69a621","Type":"ContainerStarted","Data":"80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d"} Jan 27 16:46:00 crc kubenswrapper[4966]: I0127 16:46:00.394138 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9c56q" podStartSLOduration=1.9160767170000002 podStartE2EDuration="8.394116988s" podCreationTimestamp="2026-01-27 16:45:52 +0000 UTC" firstStartedPulling="2026-01-27 16:45:53.247781553 +0000 UTC m=+3819.550575041" lastFinishedPulling="2026-01-27 16:45:59.725821814 +0000 UTC m=+3826.028615312" observedRunningTime="2026-01-27 16:46:00.385118336 +0000 UTC m=+3826.687911854" watchObservedRunningTime="2026-01-27 16:46:00.394116988 +0000 UTC m=+3826.696910476" Jan 27 16:46:02 crc kubenswrapper[4966]: I0127 16:46:02.540239 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:46:02 crc kubenswrapper[4966]: I0127 16:46:02.540651 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:46:03 crc kubenswrapper[4966]: I0127 16:46:03.594292 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9c56q" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerName="registry-server" probeResult="failure" output=< Jan 27 16:46:03 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:46:03 crc kubenswrapper[4966]: > Jan 27 16:46:04 crc kubenswrapper[4966]: I0127 16:46:04.540150 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:46:04 crc kubenswrapper[4966]: E0127 16:46:04.540516 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:46:13 crc kubenswrapper[4966]: I0127 16:46:13.574461 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9c56q" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerName="registry-server" probeResult="failure" output=< Jan 27 16:46:13 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:46:13 crc kubenswrapper[4966]: > Jan 27 16:46:15 crc kubenswrapper[4966]: I0127 16:46:15.521573 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:46:15 crc kubenswrapper[4966]: E0127 16:46:15.522436 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:46:22 crc kubenswrapper[4966]: I0127 16:46:22.626991 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:46:22 crc kubenswrapper[4966]: I0127 16:46:22.732249 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:46:23 crc kubenswrapper[4966]: I0127 16:46:23.412666 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9c56q"] Jan 27 16:46:24 crc kubenswrapper[4966]: I0127 16:46:24.637722 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9c56q" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerName="registry-server" containerID="cri-o://80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d" gracePeriod=2 Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.202187 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.239935 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-catalog-content\") pod \"b53de596-ce07-41fe-8dd3-dfda1b69a621\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.240183 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-utilities\") pod \"b53de596-ce07-41fe-8dd3-dfda1b69a621\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.240227 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d44mx\" (UniqueName: \"kubernetes.io/projected/b53de596-ce07-41fe-8dd3-dfda1b69a621-kube-api-access-d44mx\") pod \"b53de596-ce07-41fe-8dd3-dfda1b69a621\" (UID: \"b53de596-ce07-41fe-8dd3-dfda1b69a621\") " Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.242193 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-utilities" (OuterVolumeSpecName: "utilities") pod "b53de596-ce07-41fe-8dd3-dfda1b69a621" (UID: "b53de596-ce07-41fe-8dd3-dfda1b69a621"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.247754 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53de596-ce07-41fe-8dd3-dfda1b69a621-kube-api-access-d44mx" (OuterVolumeSpecName: "kube-api-access-d44mx") pod "b53de596-ce07-41fe-8dd3-dfda1b69a621" (UID: "b53de596-ce07-41fe-8dd3-dfda1b69a621"). InnerVolumeSpecName "kube-api-access-d44mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.343845 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d44mx\" (UniqueName: \"kubernetes.io/projected/b53de596-ce07-41fe-8dd3-dfda1b69a621-kube-api-access-d44mx\") on node \"crc\" DevicePath \"\"" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.344308 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.408457 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b53de596-ce07-41fe-8dd3-dfda1b69a621" (UID: "b53de596-ce07-41fe-8dd3-dfda1b69a621"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.446872 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b53de596-ce07-41fe-8dd3-dfda1b69a621-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.650151 4966 generic.go:334] "Generic (PLEG): container finished" podID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerID="80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d" exitCode=0 Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.650193 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c56q" event={"ID":"b53de596-ce07-41fe-8dd3-dfda1b69a621","Type":"ContainerDied","Data":"80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d"} Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.650230 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c56q" event={"ID":"b53de596-ce07-41fe-8dd3-dfda1b69a621","Type":"ContainerDied","Data":"31d1d8b4899a7c722eebc6a36a380dbafacfdec848e265b8801da2dd749702ed"} Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.650252 4966 scope.go:117] "RemoveContainer" containerID="80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.651058 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9c56q" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.685139 4966 scope.go:117] "RemoveContainer" containerID="b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.691125 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9c56q"] Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.706346 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9c56q"] Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.709852 4966 scope.go:117] "RemoveContainer" containerID="97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.777262 4966 scope.go:117] "RemoveContainer" containerID="80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d" Jan 27 16:46:25 crc kubenswrapper[4966]: E0127 16:46:25.777832 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d\": container with ID starting with 80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d not found: ID does not exist" containerID="80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.777923 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d"} err="failed to get container status \"80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d\": rpc error: code = NotFound desc = could not find container \"80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d\": container with ID starting with 80ebeea4a0996814986a70965bebf44bc6b23b86fa4a34ca67433fe5c8865e3d not found: ID does not exist" Jan 27 16:46:25 crc 
kubenswrapper[4966]: I0127 16:46:25.777957 4966 scope.go:117] "RemoveContainer" containerID="b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689" Jan 27 16:46:25 crc kubenswrapper[4966]: E0127 16:46:25.778367 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689\": container with ID starting with b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689 not found: ID does not exist" containerID="b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.778398 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689"} err="failed to get container status \"b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689\": rpc error: code = NotFound desc = could not find container \"b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689\": container with ID starting with b572abd1a8546e2e76d1b90441cc7fde3bc0395332080ab8e0671a14a836e689 not found: ID does not exist" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.778421 4966 scope.go:117] "RemoveContainer" containerID="97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96" Jan 27 16:46:25 crc kubenswrapper[4966]: E0127 16:46:25.778642 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96\": container with ID starting with 97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96 not found: ID does not exist" containerID="97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96" Jan 27 16:46:25 crc kubenswrapper[4966]: I0127 16:46:25.778662 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96"} err="failed to get container status \"97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96\": rpc error: code = NotFound desc = could not find container \"97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96\": container with ID starting with 97df6a5c19c9dae8ecb48e1c518d31bd99caa4a0f62f70e150e193167fb74b96 not found: ID does not exist" Jan 27 16:46:26 crc kubenswrapper[4966]: I0127 16:46:26.536701 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" path="/var/lib/kubelet/pods/b53de596-ce07-41fe-8dd3-dfda1b69a621/volumes" Jan 27 16:46:28 crc kubenswrapper[4966]: I0127 16:46:28.521519 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:46:28 crc kubenswrapper[4966]: E0127 16:46:28.522472 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:46:40 crc kubenswrapper[4966]: I0127 16:46:40.522404 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" 
Jan 27 16:46:40 crc kubenswrapper[4966]: E0127 16:46:40.524055 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:46:53 crc kubenswrapper[4966]: I0127 16:46:53.521988 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:46:53 crc kubenswrapper[4966]: E0127 16:46:53.522998 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:47:08 crc kubenswrapper[4966]: I0127 16:47:08.523637 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:47:08 crc kubenswrapper[4966]: E0127 16:47:08.525505 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:47:19 crc kubenswrapper[4966]: I0127 16:47:19.522016 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:47:20 crc kubenswrapper[4966]: I0127 16:47:20.411121 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"00d97371b3f6c7f1870beff9d1559e1b79067f1ba3fc6424cfef1b0780289b3d"} Jan 27 16:49:40 crc kubenswrapper[4966]: I0127 16:49:40.119516 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:49:40 crc kubenswrapper[4966]: I0127 16:49:40.120441 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:50:10 crc kubenswrapper[4966]: I0127 16:50:10.120775 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:50:10 crc kubenswrapper[4966]: I0127 16:50:10.121606 4966 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:50:40 crc kubenswrapper[4966]: I0127 16:50:40.120092 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:50:40 crc kubenswrapper[4966]: I0127 16:50:40.121065 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:50:40 crc kubenswrapper[4966]: I0127 16:50:40.121185 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 16:50:40 crc kubenswrapper[4966]: I0127 16:50:40.122660 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00d97371b3f6c7f1870beff9d1559e1b79067f1ba3fc6424cfef1b0780289b3d"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:50:40 crc kubenswrapper[4966]: I0127 16:50:40.122784 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://00d97371b3f6c7f1870beff9d1559e1b79067f1ba3fc6424cfef1b0780289b3d" gracePeriod=600 Jan 27 16:50:41 crc kubenswrapper[4966]: I0127 16:50:41.008256 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="00d97371b3f6c7f1870beff9d1559e1b79067f1ba3fc6424cfef1b0780289b3d" exitCode=0 Jan 27 16:50:41 crc kubenswrapper[4966]: I0127 16:50:41.008316 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"00d97371b3f6c7f1870beff9d1559e1b79067f1ba3fc6424cfef1b0780289b3d"} Jan 27 16:50:41 crc kubenswrapper[4966]: I0127 16:50:41.008864 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892"} Jan 27 16:50:41 crc kubenswrapper[4966]: I0127 16:50:41.008886 4966 scope.go:117] "RemoveContainer" containerID="30a0c42bf6d13ec3e09b30b7e4727b24427a152f4266eab2d6db583a62fd5893" Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.738449 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v589d"] Jan 27 16:51:49 crc kubenswrapper[4966]: E0127 16:51:49.739807 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" 
containerName="extract-content" Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.739820 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerName="extract-content" Jan 27 16:51:49 crc kubenswrapper[4966]: E0127 16:51:49.739836 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerName="registry-server" Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.739841 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerName="registry-server" Jan 27 16:51:49 crc kubenswrapper[4966]: E0127 16:51:49.739966 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerName="extract-utilities" Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.739975 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerName="extract-utilities" Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.740391 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b53de596-ce07-41fe-8dd3-dfda1b69a621" containerName="registry-server" Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.742259 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.756140 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v589d"] Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.909715 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-utilities\") pod \"redhat-marketplace-v589d\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.909827 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-catalog-content\") pod \"redhat-marketplace-v589d\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:49 crc kubenswrapper[4966]: I0127 16:51:49.910429 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkbcl\" (UniqueName: \"kubernetes.io/projected/80ce0394-153b-4ebd-a2fd-ee42d2d29416-kube-api-access-rkbcl\") pod \"redhat-marketplace-v589d\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:50 crc kubenswrapper[4966]: I0127 16:51:50.013519 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkbcl\" (UniqueName: \"kubernetes.io/projected/80ce0394-153b-4ebd-a2fd-ee42d2d29416-kube-api-access-rkbcl\") pod \"redhat-marketplace-v589d\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:50 crc kubenswrapper[4966]: I0127 16:51:50.013945 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-utilities\") pod \"redhat-marketplace-v589d\" (UID: 
\"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:50 crc kubenswrapper[4966]: I0127 16:51:50.014012 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-catalog-content\") pod \"redhat-marketplace-v589d\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:50 crc kubenswrapper[4966]: I0127 16:51:50.014500 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-utilities\") pod \"redhat-marketplace-v589d\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:50 crc kubenswrapper[4966]: I0127 16:51:50.014527 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-catalog-content\") pod \"redhat-marketplace-v589d\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:50 crc kubenswrapper[4966]: I0127 16:51:50.039245 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkbcl\" (UniqueName: \"kubernetes.io/projected/80ce0394-153b-4ebd-a2fd-ee42d2d29416-kube-api-access-rkbcl\") pod \"redhat-marketplace-v589d\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:50 crc kubenswrapper[4966]: I0127 16:51:50.078889 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:51:50 crc kubenswrapper[4966]: I0127 16:51:50.588943 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v589d"] Jan 27 16:51:50 crc kubenswrapper[4966]: W0127 16:51:50.848781 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80ce0394_153b_4ebd_a2fd_ee42d2d29416.slice/crio-a1e28e1c766fc7d73e554e0f1c4639efbdcabadea868a5f612fa3d6e46482969 WatchSource:0}: Error finding container a1e28e1c766fc7d73e554e0f1c4639efbdcabadea868a5f612fa3d6e46482969: Status 404 returned error can't find the container with id a1e28e1c766fc7d73e554e0f1c4639efbdcabadea868a5f612fa3d6e46482969 Jan 27 16:51:50 crc kubenswrapper[4966]: I0127 16:51:50.986040 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v589d" event={"ID":"80ce0394-153b-4ebd-a2fd-ee42d2d29416","Type":"ContainerStarted","Data":"a1e28e1c766fc7d73e554e0f1c4639efbdcabadea868a5f612fa3d6e46482969"} Jan 27 16:51:51 crc kubenswrapper[4966]: I0127 16:51:51.996936 4966 generic.go:334] "Generic (PLEG): container finished" podID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerID="72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6" exitCode=0 Jan 27 16:51:51 crc kubenswrapper[4966]: I0127 16:51:51.997009 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v589d" event={"ID":"80ce0394-153b-4ebd-a2fd-ee42d2d29416","Type":"ContainerDied","Data":"72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6"} Jan 27 16:51:51 crc kubenswrapper[4966]: I0127 16:51:51.999228 4966 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:51:54 crc kubenswrapper[4966]: I0127 16:51:54.059488 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v589d" event={"ID":"80ce0394-153b-4ebd-a2fd-ee42d2d29416","Type":"ContainerStarted","Data":"565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd"} Jan 27 16:51:54 crc kubenswrapper[4966]: I0127 16:51:54.938322 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pss88"] Jan 27 16:51:54 crc kubenswrapper[4966]: I0127 16:51:54.941442 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:54 crc kubenswrapper[4966]: I0127 16:51:54.952600 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pss88"] Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.047873 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqx4\" (UniqueName: \"kubernetes.io/projected/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-kube-api-access-ksqx4\") pod \"community-operators-pss88\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.047930 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-catalog-content\") pod \"community-operators-pss88\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.048019 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-utilities\") pod \"community-operators-pss88\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.070782 4966 generic.go:334] "Generic (PLEG): container finished" podID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerID="565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd" exitCode=0 Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.070959 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v589d" event={"ID":"80ce0394-153b-4ebd-a2fd-ee42d2d29416","Type":"ContainerDied","Data":"565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd"} Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.150394 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksqx4\" (UniqueName: \"kubernetes.io/projected/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-kube-api-access-ksqx4\") pod \"community-operators-pss88\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.150448 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-catalog-content\") pod \"community-operators-pss88\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " 
pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.150556 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-utilities\") pod \"community-operators-pss88\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.150973 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-catalog-content\") pod \"community-operators-pss88\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.151076 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-utilities\") pod \"community-operators-pss88\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.172056 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksqx4\" (UniqueName: \"kubernetes.io/projected/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-kube-api-access-ksqx4\") pod \"community-operators-pss88\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.271511 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pss88" Jan 27 16:51:55 crc kubenswrapper[4966]: I0127 16:51:55.810269 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pss88"] Jan 27 16:51:55 crc kubenswrapper[4966]: W0127 16:51:55.812818 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88fa5e0f_58b7_48cf_8433_35a3b7c7be0b.slice/crio-2823aac83d50a545fd12e196a15a7d8f737ab467df85184b913c5c75326ab1ac WatchSource:0}: Error finding container 2823aac83d50a545fd12e196a15a7d8f737ab467df85184b913c5c75326ab1ac: Status 404 returned error can't find the container with id 2823aac83d50a545fd12e196a15a7d8f737ab467df85184b913c5c75326ab1ac Jan 27 16:51:56 crc kubenswrapper[4966]: I0127 16:51:56.085238 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v589d" event={"ID":"80ce0394-153b-4ebd-a2fd-ee42d2d29416","Type":"ContainerStarted","Data":"a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24"} Jan 27 16:51:56 crc kubenswrapper[4966]: I0127 16:51:56.086850 4966 generic.go:334] "Generic (PLEG): container finished" podID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerID="2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015" exitCode=0 Jan 27 16:51:56 crc kubenswrapper[4966]: I0127 16:51:56.086879 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pss88" event={"ID":"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b","Type":"ContainerDied","Data":"2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015"} Jan 27 16:51:56 crc kubenswrapper[4966]: I0127 16:51:56.086910 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-pss88" event={"ID":"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b","Type":"ContainerStarted","Data":"2823aac83d50a545fd12e196a15a7d8f737ab467df85184b913c5c75326ab1ac"} Jan 27 16:51:56 crc kubenswrapper[4966]: I0127 16:51:56.114819 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v589d" podStartSLOduration=3.557665326 podStartE2EDuration="7.114800065s" podCreationTimestamp="2026-01-27 16:51:49 +0000 UTC" firstStartedPulling="2026-01-27 16:51:51.998973197 +0000 UTC m=+4178.301766685" lastFinishedPulling="2026-01-27 16:51:55.556107936 +0000 UTC m=+4181.858901424" observedRunningTime="2026-01-27 16:51:56.106147984 +0000 UTC m=+4182.408941472" watchObservedRunningTime="2026-01-27 16:51:56.114800065 +0000 UTC m=+4182.417593553" Jan 27 16:51:57 crc kubenswrapper[4966]: I0127 16:51:57.121057 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pss88" event={"ID":"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b","Type":"ContainerStarted","Data":"fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c"} Jan 27 16:51:59 crc kubenswrapper[4966]: I0127 16:51:59.156494 4966 generic.go:334] "Generic (PLEG): container finished" podID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerID="fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c" exitCode=0 Jan 27 16:51:59 crc kubenswrapper[4966]: I0127 16:51:59.156612 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pss88" event={"ID":"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b","Type":"ContainerDied","Data":"fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c"} Jan 27 16:52:00 crc kubenswrapper[4966]: I0127 16:52:00.079664 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:52:00 crc kubenswrapper[4966]: I0127 16:52:00.079710 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:52:01 crc kubenswrapper[4966]: I0127 16:52:01.180965 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pss88" event={"ID":"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b","Type":"ContainerStarted","Data":"a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7"} Jan 27 16:52:01 crc kubenswrapper[4966]: I0127 16:52:01.206357 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pss88" podStartSLOduration=3.6378890889999997 podStartE2EDuration="7.206342383s" podCreationTimestamp="2026-01-27 16:51:54 +0000 UTC" firstStartedPulling="2026-01-27 16:51:56.088806281 +0000 UTC m=+4182.391599769" lastFinishedPulling="2026-01-27 16:51:59.657259575 +0000 UTC m=+4185.960053063" observedRunningTime="2026-01-27 16:52:01.204044002 +0000 UTC m=+4187.506837510" watchObservedRunningTime="2026-01-27 16:52:01.206342383 +0000 UTC m=+4187.509135871" Jan 27 16:52:01 crc kubenswrapper[4966]: I0127 16:52:01.384697 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-v589d" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerName="registry-server" probeResult="failure" output=< Jan 27 16:52:01 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:52:01 crc kubenswrapper[4966]: > Jan 27 16:52:05 crc kubenswrapper[4966]: I0127 
16:52:05.272495 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pss88" Jan 27 16:52:05 crc kubenswrapper[4966]: I0127 16:52:05.272984 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pss88" Jan 27 16:52:06 crc kubenswrapper[4966]: I0127 16:52:06.333785 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pss88" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerName="registry-server" probeResult="failure" output=< Jan 27 16:52:06 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 16:52:06 crc kubenswrapper[4966]: > Jan 27 16:52:10 crc kubenswrapper[4966]: I0127 16:52:10.136121 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:52:10 crc kubenswrapper[4966]: I0127 16:52:10.219236 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:52:11 crc kubenswrapper[4966]: I0127 16:52:11.123781 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v589d"] Jan 27 16:52:11 crc kubenswrapper[4966]: I0127 16:52:11.310627 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v589d" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerName="registry-server" containerID="cri-o://a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24" gracePeriod=2 Jan 27 16:52:11 crc kubenswrapper[4966]: I0127 16:52:11.891029 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:52:11 crc kubenswrapper[4966]: I0127 16:52:11.986551 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkbcl\" (UniqueName: \"kubernetes.io/projected/80ce0394-153b-4ebd-a2fd-ee42d2d29416-kube-api-access-rkbcl\") pod \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " Jan 27 16:52:11 crc kubenswrapper[4966]: I0127 16:52:11.986756 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-catalog-content\") pod \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " Jan 27 16:52:11 crc kubenswrapper[4966]: I0127 16:52:11.994802 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-utilities\") pod \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\" (UID: \"80ce0394-153b-4ebd-a2fd-ee42d2d29416\") " Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.001143 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-utilities" (OuterVolumeSpecName: "utilities") pod "80ce0394-153b-4ebd-a2fd-ee42d2d29416" (UID: "80ce0394-153b-4ebd-a2fd-ee42d2d29416"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.008161 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80ce0394-153b-4ebd-a2fd-ee42d2d29416-kube-api-access-rkbcl" (OuterVolumeSpecName: "kube-api-access-rkbcl") pod "80ce0394-153b-4ebd-a2fd-ee42d2d29416" (UID: "80ce0394-153b-4ebd-a2fd-ee42d2d29416"). InnerVolumeSpecName "kube-api-access-rkbcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.026054 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80ce0394-153b-4ebd-a2fd-ee42d2d29416" (UID: "80ce0394-153b-4ebd-a2fd-ee42d2d29416"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.100670 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.101002 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80ce0394-153b-4ebd-a2fd-ee42d2d29416-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.101017 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkbcl\" (UniqueName: \"kubernetes.io/projected/80ce0394-153b-4ebd-a2fd-ee42d2d29416-kube-api-access-rkbcl\") on node \"crc\" DevicePath \"\"" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.332795 4966 generic.go:334] "Generic (PLEG): container finished" podID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerID="a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24" exitCode=0 Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.332879 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v589d" event={"ID":"80ce0394-153b-4ebd-a2fd-ee42d2d29416","Type":"ContainerDied","Data":"a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24"} Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.332888 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v589d" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.333011 4966 scope.go:117] "RemoveContainer" containerID="a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.332984 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v589d" event={"ID":"80ce0394-153b-4ebd-a2fd-ee42d2d29416","Type":"ContainerDied","Data":"a1e28e1c766fc7d73e554e0f1c4639efbdcabadea868a5f612fa3d6e46482969"} Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.373324 4966 scope.go:117] "RemoveContainer" containerID="565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.387852 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v589d"] Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.401596 4966 scope.go:117] "RemoveContainer" containerID="72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.401787 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v589d"] Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.454075 4966 scope.go:117] "RemoveContainer" containerID="a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24" Jan 27 16:52:12 crc kubenswrapper[4966]: E0127 16:52:12.454533 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24\": container with ID starting with a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24 not found: ID does not exist" containerID="a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.454594 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24"} err="failed to get container status \"a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24\": rpc error: code = NotFound desc = could not find container \"a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24\": container with ID starting with a00db30035a61e3c337537775096fdc09c9c677915315429cac75f459f5e3c24 not found: ID does not exist" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.454634 4966 scope.go:117] "RemoveContainer" containerID="565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd" Jan 27 16:52:12 crc kubenswrapper[4966]: E0127 16:52:12.455193 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd\": container with ID starting with 565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd not found: ID does not exist" containerID="565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.455228 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd"} err="failed to get container status \"565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd\": rpc error: code = NotFound desc = could not find 
container \"565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd\": container with ID starting with 565b609c089cac4f312ba0ce147248dc9844c2c4a8f07d3e1eeec256824460bd not found: ID does not exist" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.455248 4966 scope.go:117] "RemoveContainer" containerID="72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6" Jan 27 16:52:12 crc kubenswrapper[4966]: E0127 16:52:12.455774 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6\": container with ID starting with 72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6 not found: ID does not exist" containerID="72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.455814 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6"} err="failed to get container status \"72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6\": rpc error: code = NotFound desc = could not find container \"72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6\": container with ID starting with 72137ca16bbc2af921e8a9f49ae410cba2c3829aae6e0743506c47aa498068f6 not found: ID does not exist" Jan 27 16:52:12 crc kubenswrapper[4966]: I0127 16:52:12.542096 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" path="/var/lib/kubelet/pods/80ce0394-153b-4ebd-a2fd-ee42d2d29416/volumes" Jan 27 16:52:16 crc kubenswrapper[4966]: I0127 16:52:16.519882 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pss88" Jan 27 16:52:16 crc kubenswrapper[4966]: I0127 16:52:16.614655 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pss88" Jan 27 16:52:16 crc kubenswrapper[4966]: I0127 16:52:16.772439 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pss88"] Jan 27 16:52:18 crc kubenswrapper[4966]: I0127 16:52:18.421827 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pss88" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerName="registry-server" containerID="cri-o://a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7" gracePeriod=2 Jan 27 16:52:18 crc kubenswrapper[4966]: I0127 16:52:18.945570 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pss88" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.001001 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-catalog-content\") pod \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.001418 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksqx4\" (UniqueName: \"kubernetes.io/projected/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-kube-api-access-ksqx4\") pod \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.001502 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-utilities\") pod \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\" (UID: \"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b\") " Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.002990 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-utilities" (OuterVolumeSpecName: "utilities") pod "88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" (UID: "88fa5e0f-58b7-48cf-8433-35a3b7c7be0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.019452 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-kube-api-access-ksqx4" (OuterVolumeSpecName: "kube-api-access-ksqx4") pod "88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" (UID: "88fa5e0f-58b7-48cf-8433-35a3b7c7be0b"). InnerVolumeSpecName "kube-api-access-ksqx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.067333 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" (UID: "88fa5e0f-58b7-48cf-8433-35a3b7c7be0b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.104659 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.104699 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.104715 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksqx4\" (UniqueName: \"kubernetes.io/projected/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b-kube-api-access-ksqx4\") on node \"crc\" DevicePath \"\"" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.433460 4966 generic.go:334] "Generic (PLEG): container finished" podID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerID="a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7" exitCode=0 Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.433522 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pss88" event={"ID":"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b","Type":"ContainerDied","Data":"a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7"} Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.433557 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pss88" event={"ID":"88fa5e0f-58b7-48cf-8433-35a3b7c7be0b","Type":"ContainerDied","Data":"2823aac83d50a545fd12e196a15a7d8f737ab467df85184b913c5c75326ab1ac"} Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.433578 4966 scope.go:117] "RemoveContainer" containerID="a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.433774 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pss88" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.464982 4966 scope.go:117] "RemoveContainer" containerID="fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.489023 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pss88"] Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.495941 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pss88"] Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.505838 4966 scope.go:117] "RemoveContainer" containerID="2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.542569 4966 scope.go:117] "RemoveContainer" containerID="a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7" Jan 27 16:52:19 crc kubenswrapper[4966]: E0127 16:52:19.543377 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7\": container with ID starting with a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7 not found: ID does not exist" containerID="a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.543451 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7"} err="failed to get container status \"a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7\": rpc error: code = NotFound desc = could not find container \"a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7\": container with ID starting with a6b23f53b35956248696b82f3eb73af98d0286a2bfc2daab1cd1777db96b32d7 not found: ID does not exist" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.543552 4966 scope.go:117] "RemoveContainer" containerID="fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c" Jan 27 16:52:19 crc kubenswrapper[4966]: E0127 16:52:19.543929 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c\": container with ID starting with fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c not found: ID does not exist" containerID="fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.543965 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c"} err="failed to get container status \"fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c\": rpc error: code = NotFound desc = could not find container \"fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c\": container with ID starting with fcb69ad8fdb24f5c1f12510b7f68bd6f656be923d337271c30c5ae5bfd3db81c not found: ID does not exist" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.544016 4966 scope.go:117] "RemoveContainer" containerID="2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015" Jan 27 16:52:19 crc kubenswrapper[4966]: E0127 16:52:19.544307 4966 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015\": container with ID starting with 2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015 not found: ID does not exist" containerID="2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015" Jan 27 16:52:19 crc kubenswrapper[4966]: I0127 16:52:19.544349 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015"} err="failed to get container status \"2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015\": rpc error: code = NotFound desc = could not find container \"2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015\": container with ID starting with 2b5d1665933aab3f83df26cdabf835060df85850f40fa86e787dc144ad948015 not found: ID does not exist" Jan 27 16:52:20 crc kubenswrapper[4966]: I0127 16:52:20.535020 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" path="/var/lib/kubelet/pods/88fa5e0f-58b7-48cf-8433-35a3b7c7be0b/volumes" Jan 27 16:52:40 crc kubenswrapper[4966]: I0127 16:52:40.119776 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:52:40 crc kubenswrapper[4966]: I0127 16:52:40.120368 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.415109 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62zgh"] Jan 27 16:52:41 crc kubenswrapper[4966]: E0127 16:52:41.417935 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerName="registry-server" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.418160 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerName="registry-server" Jan 27 16:52:41 crc kubenswrapper[4966]: E0127 16:52:41.418336 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerName="extract-content" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.418497 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerName="extract-content" Jan 27 16:52:41 crc kubenswrapper[4966]: E0127 16:52:41.418689 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerName="extract-content" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.418839 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerName="extract-content" Jan 27 16:52:41 crc kubenswrapper[4966]: E0127 16:52:41.419647 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerName="extract-utilities" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 
16:52:41.419846 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerName="extract-utilities" Jan 27 16:52:41 crc kubenswrapper[4966]: E0127 16:52:41.420205 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerName="registry-server" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.420380 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerName="registry-server" Jan 27 16:52:41 crc kubenswrapper[4966]: E0127 16:52:41.420596 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerName="extract-utilities" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.420762 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerName="extract-utilities" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.421574 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="80ce0394-153b-4ebd-a2fd-ee42d2d29416" containerName="registry-server" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.421822 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fa5e0f-58b7-48cf-8433-35a3b7c7be0b" containerName="registry-server" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.430133 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.440662 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62zgh"] Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.620038 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-utilities\") pod \"certified-operators-62zgh\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.621346 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-catalog-content\") pod \"certified-operators-62zgh\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.621505 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48kck\" (UniqueName: \"kubernetes.io/projected/742399ce-03c6-4ca2-9804-62309b9d75e8-kube-api-access-48kck\") pod \"certified-operators-62zgh\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.723674 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-catalog-content\") pod \"certified-operators-62zgh\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.723726 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48kck\" (UniqueName: 
\"kubernetes.io/projected/742399ce-03c6-4ca2-9804-62309b9d75e8-kube-api-access-48kck\") pod \"certified-operators-62zgh\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.723851 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-utilities\") pod \"certified-operators-62zgh\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.724397 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-utilities\") pod \"certified-operators-62zgh\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.724413 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-catalog-content\") pod \"certified-operators-62zgh\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:41 crc kubenswrapper[4966]: I0127 16:52:41.957098 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48kck\" (UniqueName: \"kubernetes.io/projected/742399ce-03c6-4ca2-9804-62309b9d75e8-kube-api-access-48kck\") pod \"certified-operators-62zgh\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:42 crc kubenswrapper[4966]: I0127 16:52:42.066601 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:42 crc kubenswrapper[4966]: I0127 16:52:42.548550 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62zgh"] Jan 27 16:52:42 crc kubenswrapper[4966]: I0127 16:52:42.731315 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zgh" event={"ID":"742399ce-03c6-4ca2-9804-62309b9d75e8","Type":"ContainerStarted","Data":"ad3f7a79d50e42348905579d6924d5e58d6f6e9d5e122c471c85e91f5c054a02"} Jan 27 16:52:43 crc kubenswrapper[4966]: I0127 16:52:43.746629 4966 generic.go:334] "Generic (PLEG): container finished" podID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerID="f71628cd74e35ef13efa09f181b379e9dd516b98dc45bd6f6c13e53e4f2e16c2" exitCode=0 Jan 27 16:52:43 crc kubenswrapper[4966]: I0127 16:52:43.746940 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zgh" event={"ID":"742399ce-03c6-4ca2-9804-62309b9d75e8","Type":"ContainerDied","Data":"f71628cd74e35ef13efa09f181b379e9dd516b98dc45bd6f6c13e53e4f2e16c2"} Jan 27 16:52:45 crc kubenswrapper[4966]: I0127 16:52:45.779100 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zgh" event={"ID":"742399ce-03c6-4ca2-9804-62309b9d75e8","Type":"ContainerStarted","Data":"1901c9d69fc3673aa2db9d2118f62eb67a8bfa9db844e143d2b6223767593885"} Jan 27 16:52:46 crc kubenswrapper[4966]: I0127 16:52:46.796407 4966 generic.go:334] "Generic (PLEG): container finished" podID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerID="1901c9d69fc3673aa2db9d2118f62eb67a8bfa9db844e143d2b6223767593885" exitCode=0 Jan 27 16:52:46 crc kubenswrapper[4966]: I0127 16:52:46.796509 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zgh" event={"ID":"742399ce-03c6-4ca2-9804-62309b9d75e8","Type":"ContainerDied","Data":"1901c9d69fc3673aa2db9d2118f62eb67a8bfa9db844e143d2b6223767593885"} Jan 27 16:52:47 crc kubenswrapper[4966]: I0127 16:52:47.821750 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zgh" event={"ID":"742399ce-03c6-4ca2-9804-62309b9d75e8","Type":"ContainerStarted","Data":"91529aa4f4a7f908a0b5f9b299e14a600aae9123687614a695f671f67549493e"} Jan 27 16:52:47 crc kubenswrapper[4966]: I0127 16:52:47.843923 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62zgh" podStartSLOduration=3.402722109 podStartE2EDuration="6.843887385s" podCreationTimestamp="2026-01-27 16:52:41 +0000 UTC" firstStartedPulling="2026-01-27 16:52:43.749880529 +0000 UTC m=+4230.052674057" lastFinishedPulling="2026-01-27 16:52:47.191045845 +0000 UTC m=+4233.493839333" observedRunningTime="2026-01-27 16:52:47.842154881 +0000 UTC m=+4234.144948379" watchObservedRunningTime="2026-01-27 16:52:47.843887385 +0000 UTC m=+4234.146680883" Jan 27 16:52:52 crc kubenswrapper[4966]: I0127 16:52:52.066733 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:52 crc kubenswrapper[4966]: I0127 16:52:52.067349 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:52 crc kubenswrapper[4966]: I0127 16:52:52.147843 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:52 crc kubenswrapper[4966]: I0127 16:52:52.961118 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:53 crc kubenswrapper[4966]: I0127 16:52:53.029160 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62zgh"] Jan 27 16:52:54 crc kubenswrapper[4966]: I0127 16:52:54.910193 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-62zgh" podUID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerName="registry-server" containerID="cri-o://91529aa4f4a7f908a0b5f9b299e14a600aae9123687614a695f671f67549493e" gracePeriod=2 Jan 27 16:52:55 crc kubenswrapper[4966]: I0127 16:52:55.924682 4966 generic.go:334] "Generic (PLEG): container finished" podID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerID="91529aa4f4a7f908a0b5f9b299e14a600aae9123687614a695f671f67549493e" exitCode=0 Jan 27 16:52:55 crc kubenswrapper[4966]: I0127 16:52:55.924769 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zgh" event={"ID":"742399ce-03c6-4ca2-9804-62309b9d75e8","Type":"ContainerDied","Data":"91529aa4f4a7f908a0b5f9b299e14a600aae9123687614a695f671f67549493e"} Jan 27 16:52:55 crc kubenswrapper[4966]: I0127 16:52:55.925140 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zgh" event={"ID":"742399ce-03c6-4ca2-9804-62309b9d75e8","Type":"ContainerDied","Data":"ad3f7a79d50e42348905579d6924d5e58d6f6e9d5e122c471c85e91f5c054a02"} Jan 27 16:52:55 crc kubenswrapper[4966]: I0127 16:52:55.925158 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad3f7a79d50e42348905579d6924d5e58d6f6e9d5e122c471c85e91f5c054a02" Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.525276 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.627285 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-utilities\") pod \"742399ce-03c6-4ca2-9804-62309b9d75e8\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.627562 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48kck\" (UniqueName: \"kubernetes.io/projected/742399ce-03c6-4ca2-9804-62309b9d75e8-kube-api-access-48kck\") pod \"742399ce-03c6-4ca2-9804-62309b9d75e8\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.627646 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-catalog-content\") pod \"742399ce-03c6-4ca2-9804-62309b9d75e8\" (UID: \"742399ce-03c6-4ca2-9804-62309b9d75e8\") " Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.632035 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-utilities" (OuterVolumeSpecName: "utilities") pod "742399ce-03c6-4ca2-9804-62309b9d75e8" (UID: "742399ce-03c6-4ca2-9804-62309b9d75e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.638610 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742399ce-03c6-4ca2-9804-62309b9d75e8-kube-api-access-48kck" (OuterVolumeSpecName: "kube-api-access-48kck") pod "742399ce-03c6-4ca2-9804-62309b9d75e8" (UID: "742399ce-03c6-4ca2-9804-62309b9d75e8"). InnerVolumeSpecName "kube-api-access-48kck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.681744 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "742399ce-03c6-4ca2-9804-62309b9d75e8" (UID: "742399ce-03c6-4ca2-9804-62309b9d75e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.734120 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48kck\" (UniqueName: \"kubernetes.io/projected/742399ce-03c6-4ca2-9804-62309b9d75e8-kube-api-access-48kck\") on node \"crc\" DevicePath \"\"" Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.734154 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.734163 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742399ce-03c6-4ca2-9804-62309b9d75e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.939766 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62zgh" Jan 27 16:52:56 crc kubenswrapper[4966]: I0127 16:52:56.989256 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62zgh"] Jan 27 16:52:57 crc kubenswrapper[4966]: I0127 16:52:57.001517 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-62zgh"] Jan 27 16:52:58 crc kubenswrapper[4966]: I0127 16:52:58.535249 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="742399ce-03c6-4ca2-9804-62309b9d75e8" path="/var/lib/kubelet/pods/742399ce-03c6-4ca2-9804-62309b9d75e8/volumes" Jan 27 16:53:10 crc kubenswrapper[4966]: I0127 16:53:10.121083 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:53:10 crc kubenswrapper[4966]: I0127 16:53:10.121750 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:53:40 crc kubenswrapper[4966]: I0127 16:53:40.120022 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:53:40 crc kubenswrapper[4966]: I0127 16:53:40.120495 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:53:40 crc kubenswrapper[4966]: I0127 16:53:40.120532 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 16:53:40 crc kubenswrapper[4966]: I0127 16:53:40.121362 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:53:40 crc kubenswrapper[4966]: I0127 16:53:40.121431 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" gracePeriod=600 Jan 27 16:53:40 crc kubenswrapper[4966]: E0127 16:53:40.248363 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:53:40 crc kubenswrapper[4966]: I0127 16:53:40.525177 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" exitCode=0 Jan 27 16:53:40 crc kubenswrapper[4966]: I0127 16:53:40.537317 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892"} Jan 27 16:53:40 crc kubenswrapper[4966]: I0127 16:53:40.537400 4966 scope.go:117] "RemoveContainer" containerID="00d97371b3f6c7f1870beff9d1559e1b79067f1ba3fc6424cfef1b0780289b3d" Jan 27 16:53:40 crc kubenswrapper[4966]: I0127 16:53:40.538244 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:53:40 crc kubenswrapper[4966]: E0127 16:53:40.538668 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:53:55 crc kubenswrapper[4966]: I0127 16:53:55.521330 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:53:55 crc kubenswrapper[4966]: E0127 16:53:55.522201 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:54:06 crc kubenswrapper[4966]: I0127 16:54:06.522298 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:54:06 crc kubenswrapper[4966]: E0127 16:54:06.524744 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:54:18 crc kubenswrapper[4966]: I0127 16:54:18.521141 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:54:18 crc kubenswrapper[4966]: E0127 16:54:18.523835 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:54:31 crc kubenswrapper[4966]: I0127 16:54:31.522381 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:54:31 crc kubenswrapper[4966]: E0127 16:54:31.524025 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:54:46 crc kubenswrapper[4966]: I0127 16:54:46.521319 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:54:46 crc kubenswrapper[4966]: E0127 16:54:46.522497 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:54:58 crc kubenswrapper[4966]: I0127 16:54:58.522087 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:54:58 crc kubenswrapper[4966]: E0127 16:54:58.523954 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:55:11 crc kubenswrapper[4966]: I0127 16:55:11.525452 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:55:11 crc kubenswrapper[4966]: E0127 16:55:11.527089 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:55:26 crc kubenswrapper[4966]: I0127 16:55:26.521334 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:55:26 crc kubenswrapper[4966]: E0127 16:55:26.522315 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" 
podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:55:37 crc kubenswrapper[4966]: I0127 16:55:37.520662 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:55:37 crc kubenswrapper[4966]: E0127 16:55:37.521701 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:55:50 crc kubenswrapper[4966]: I0127 16:55:50.522939 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:55:50 crc kubenswrapper[4966]: E0127 16:55:50.524343 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:56:03 crc kubenswrapper[4966]: I0127 16:56:03.462260 4966 trace.go:236] Trace[327542497]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (27-Jan-2026 16:56:02.174) (total time: 1285ms): Jan 27 16:56:03 crc kubenswrapper[4966]: Trace[327542497]: [1.285918998s] [1.285918998s] END Jan 27 16:56:04 crc kubenswrapper[4966]: I0127 16:56:04.548043 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:56:04 crc kubenswrapper[4966]: E0127 16:56:04.549265 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:56:17 crc kubenswrapper[4966]: I0127 16:56:17.521203 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:56:17 crc kubenswrapper[4966]: E0127 16:56:17.522114 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:56:31 crc kubenswrapper[4966]: I0127 16:56:31.521159 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:56:31 crc kubenswrapper[4966]: E0127 16:56:31.522402 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:56:42 crc kubenswrapper[4966]: I0127 16:56:42.521054 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:56:42 crc kubenswrapper[4966]: E0127 16:56:42.521944 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:56:56 crc kubenswrapper[4966]: I0127 16:56:56.522195 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:56:56 crc kubenswrapper[4966]: E0127 16:56:56.523487 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:57:10 crc kubenswrapper[4966]: I0127 16:57:10.521322 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:57:10 crc kubenswrapper[4966]: E0127 16:57:10.523172 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:57:23 crc kubenswrapper[4966]: I0127 16:57:23.521121 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:57:23 crc kubenswrapper[4966]: E0127 16:57:23.522093 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:57:38 crc kubenswrapper[4966]: I0127 16:57:38.522839 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:57:38 crc kubenswrapper[4966]: E0127 16:57:38.524504 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" 
podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.433849 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qxv9j"] Jan 27 16:57:46 crc kubenswrapper[4966]: E0127 16:57:46.435031 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerName="extract-utilities" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.435047 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerName="extract-utilities" Jan 27 16:57:46 crc kubenswrapper[4966]: E0127 16:57:46.435086 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerName="registry-server" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.435095 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerName="registry-server" Jan 27 16:57:46 crc kubenswrapper[4966]: E0127 16:57:46.435136 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerName="extract-content" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.435146 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerName="extract-content" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.435420 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="742399ce-03c6-4ca2-9804-62309b9d75e8" containerName="registry-server" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.437856 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.452215 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxv9j"] Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.581224 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczfx\" (UniqueName: \"kubernetes.io/projected/39b44061-144f-48c2-a15c-dccfb90d9b59-kube-api-access-gczfx\") pod \"redhat-operators-qxv9j\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") " pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.581756 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-catalog-content\") pod \"redhat-operators-qxv9j\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") " pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.582008 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-utilities\") pod \"redhat-operators-qxv9j\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") " pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.684538 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gczfx\" (UniqueName: \"kubernetes.io/projected/39b44061-144f-48c2-a15c-dccfb90d9b59-kube-api-access-gczfx\") pod \"redhat-operators-qxv9j\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") " 
pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.684692 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-catalog-content\") pod \"redhat-operators-qxv9j\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") " pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.684750 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-utilities\") pod \"redhat-operators-qxv9j\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") " pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.685358 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-catalog-content\") pod \"redhat-operators-qxv9j\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") " pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.685396 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-utilities\") pod \"redhat-operators-qxv9j\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") " pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.707834 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczfx\" (UniqueName: \"kubernetes.io/projected/39b44061-144f-48c2-a15c-dccfb90d9b59-kube-api-access-gczfx\") pod \"redhat-operators-qxv9j\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") " pod="openshift-marketplace/redhat-operators-qxv9j" Jan 27 16:57:46 crc kubenswrapper[4966]: I0127 16:57:46.766476 4966 util.go:30] "No sandbox for pod can be found. 
Jan 27 16:57:47 crc kubenswrapper[4966]: I0127 16:57:47.378594 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxv9j"]
Jan 27 16:57:48 crc kubenswrapper[4966]: E0127 16:57:48.280131 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39b44061_144f_48c2_a15c_dccfb90d9b59.slice/crio-99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39b44061_144f_48c2_a15c_dccfb90d9b59.slice/crio-conmon-99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 16:57:48 crc kubenswrapper[4966]: E0127 16:57:48.280829 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39b44061_144f_48c2_a15c_dccfb90d9b59.slice/crio-conmon-99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39b44061_144f_48c2_a15c_dccfb90d9b59.slice/crio-99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 16:57:48 crc kubenswrapper[4966]: I0127 16:57:48.651083 4966 generic.go:334] "Generic (PLEG): container finished" podID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerID="99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa" exitCode=0
Jan 27 16:57:48 crc kubenswrapper[4966]: I0127 16:57:48.651145 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxv9j" event={"ID":"39b44061-144f-48c2-a15c-dccfb90d9b59","Type":"ContainerDied","Data":"99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa"}
Jan 27 16:57:48 crc kubenswrapper[4966]: I0127 16:57:48.651183 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxv9j" event={"ID":"39b44061-144f-48c2-a15c-dccfb90d9b59","Type":"ContainerStarted","Data":"a73cdcaf4d790b9465c8d44fb7751a365ccaf24be7eee4614c9a33341c62b216"}
Jan 27 16:57:48 crc kubenswrapper[4966]: I0127 16:57:48.653246 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 16:57:50 crc kubenswrapper[4966]: I0127 16:57:50.522454 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892"
Jan 27 16:57:50 crc kubenswrapper[4966]: E0127 16:57:50.523767 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:57:50 crc kubenswrapper[4966]: I0127 16:57:50.676929 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxv9j" event={"ID":"39b44061-144f-48c2-a15c-dccfb90d9b59","Type":"ContainerStarted","Data":"700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4"}
Jan 27 16:57:54 crc kubenswrapper[4966]: I0127 16:57:54.742779 4966 generic.go:334] "Generic (PLEG): container finished" podID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerID="700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4" exitCode=0
Jan 27 16:57:54 crc kubenswrapper[4966]: I0127 16:57:54.742830 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxv9j" event={"ID":"39b44061-144f-48c2-a15c-dccfb90d9b59","Type":"ContainerDied","Data":"700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4"}
Jan 27 16:57:55 crc kubenswrapper[4966]: I0127 16:57:55.767033 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxv9j" event={"ID":"39b44061-144f-48c2-a15c-dccfb90d9b59","Type":"ContainerStarted","Data":"7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2"}
Jan 27 16:57:55 crc kubenswrapper[4966]: I0127 16:57:55.798541 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qxv9j" podStartSLOduration=3.267966214 podStartE2EDuration="9.798505124s" podCreationTimestamp="2026-01-27 16:57:46 +0000 UTC" firstStartedPulling="2026-01-27 16:57:48.653011079 +0000 UTC m=+4534.955804567" lastFinishedPulling="2026-01-27 16:57:55.183549979 +0000 UTC m=+4541.486343477" observedRunningTime="2026-01-27 16:57:55.786583431 +0000 UTC m=+4542.089377019" watchObservedRunningTime="2026-01-27 16:57:55.798505124 +0000 UTC m=+4542.101298692"
Jan 27 16:57:56 crc kubenswrapper[4966]: I0127 16:57:56.766637 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qxv9j"
Jan 27 16:57:56 crc kubenswrapper[4966]: I0127 16:57:56.770726 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qxv9j"
Jan 27 16:57:57 crc kubenswrapper[4966]: I0127 16:57:57.845706 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qxv9j" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="registry-server" probeResult="failure" output=<
Jan 27 16:57:57 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s
Jan 27 16:57:57 crc kubenswrapper[4966]: >
Jan 27 16:58:03 crc kubenswrapper[4966]: I0127 16:58:03.521466 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892"
Jan 27 16:58:03 crc kubenswrapper[4966]: E0127 16:58:03.523543 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:58:07 crc kubenswrapper[4966]: I0127 16:58:07.829282 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qxv9j" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="registry-server" probeResult="failure" output=<
Jan 27 16:58:07 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s
Jan 27 16:58:07 crc kubenswrapper[4966]: >
Jan 27 16:58:14 crc kubenswrapper[4966]: I0127 16:58:14.521864 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892"
Jan 27 16:58:14 crc kubenswrapper[4966]: E0127 16:58:14.522865 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 16:58:17 crc kubenswrapper[4966]: I0127 16:58:17.716277 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qxv9j"
Jan 27 16:58:17 crc kubenswrapper[4966]: I0127 16:58:17.784185 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qxv9j"
Jan 27 16:58:17 crc kubenswrapper[4966]: I0127 16:58:17.976512 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxv9j"]
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.060942 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qxv9j" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="registry-server" containerID="cri-o://7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2" gracePeriod=2
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.690879 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxv9j"
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.783299 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gczfx\" (UniqueName: \"kubernetes.io/projected/39b44061-144f-48c2-a15c-dccfb90d9b59-kube-api-access-gczfx\") pod \"39b44061-144f-48c2-a15c-dccfb90d9b59\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") "
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.783503 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-utilities\") pod \"39b44061-144f-48c2-a15c-dccfb90d9b59\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") "
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.783726 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-catalog-content\") pod \"39b44061-144f-48c2-a15c-dccfb90d9b59\" (UID: \"39b44061-144f-48c2-a15c-dccfb90d9b59\") "
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.784338 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-utilities" (OuterVolumeSpecName: "utilities") pod "39b44061-144f-48c2-a15c-dccfb90d9b59" (UID: "39b44061-144f-48c2-a15c-dccfb90d9b59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.796012 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b44061-144f-48c2-a15c-dccfb90d9b59-kube-api-access-gczfx" (OuterVolumeSpecName: "kube-api-access-gczfx") pod "39b44061-144f-48c2-a15c-dccfb90d9b59" (UID: "39b44061-144f-48c2-a15c-dccfb90d9b59"). InnerVolumeSpecName "kube-api-access-gczfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.887646 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gczfx\" (UniqueName: \"kubernetes.io/projected/39b44061-144f-48c2-a15c-dccfb90d9b59-kube-api-access-gczfx\") on node \"crc\" DevicePath \"\""
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.888249 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.947208 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39b44061-144f-48c2-a15c-dccfb90d9b59" (UID: "39b44061-144f-48c2-a15c-dccfb90d9b59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:58:19 crc kubenswrapper[4966]: I0127 16:58:19.990855 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b44061-144f-48c2-a15c-dccfb90d9b59-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.077993 4966 generic.go:334] "Generic (PLEG): container finished" podID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerID="7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2" exitCode=0
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.078246 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxv9j" event={"ID":"39b44061-144f-48c2-a15c-dccfb90d9b59","Type":"ContainerDied","Data":"7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2"}
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.078599 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxv9j" event={"ID":"39b44061-144f-48c2-a15c-dccfb90d9b59","Type":"ContainerDied","Data":"a73cdcaf4d790b9465c8d44fb7751a365ccaf24be7eee4614c9a33341c62b216"}
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.078634 4966 scope.go:117] "RemoveContainer" containerID="7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2"
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.078381 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxv9j"
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.111210 4966 scope.go:117] "RemoveContainer" containerID="700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4"
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.130246 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxv9j"]
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.148190 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qxv9j"]
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.155396 4966 scope.go:117] "RemoveContainer" containerID="99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa"
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.202318 4966 scope.go:117] "RemoveContainer" containerID="7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2"
Jan 27 16:58:20 crc kubenswrapper[4966]: E0127 16:58:20.202739 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2\": container with ID starting with 7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2 not found: ID does not exist" containerID="7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2"
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.202790 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2"} err="failed to get container status \"7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2\": rpc error: code = NotFound desc = could not find container \"7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2\": container with ID starting with 7e9ca827b0f3ff6a75d2e425440ce9cdbbe63e9402107c8aa9226165b6c280d2 not found: ID does not exist"
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.202827 4966 scope.go:117] "RemoveContainer" containerID="700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4"
Jan 27 16:58:20 crc kubenswrapper[4966]: E0127 16:58:20.203302 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4\": container with ID starting with 700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4 not found: ID does not exist" containerID="700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4"
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.203329 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4"} err="failed to get container status \"700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4\": rpc error: code = NotFound desc = could not find container \"700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4\": container with ID starting with 700881b43778e86a27bf07f073824d9128d9c5267919550eb7e2c9d335113ba4 not found: ID does not exist"
Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.203346 4966 scope.go:117] "RemoveContainer" containerID="99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa"
Jan 27 16:58:20 crc kubenswrapper[4966]: E0127 16:58:20.203761 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa\": container with ID starting with 99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa not found: ID does not exist" containerID="99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa"
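The NotFound errors above are expected rather than a bug: the containers were already removed along with the pod sandbox, and the later "RemoveContainer" passes merely discover that. The usual pattern for this kind of cleanup is to treat a NotFound status from the runtime as success; a minimal Go sketch against gRPC status codes (removeContainer is an illustrative stand-in for the CRI call, not kubelet code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer/ContainerStatus call
// that fails because the container is already gone.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// cleanup treats NotFound as success: the container no longer existing is
// exactly the end state the caller wanted.
func cleanup(id string) error {
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil
		}
		return err
	}
	return nil
}

func main() {
	if err := cleanup("7e9ca827b0f3"); err != nil {
		fmt.Println("cleanup failed:", err)
		return
	}
	fmt.Println("container already removed; nothing to do")
}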
err="rpc error: code = NotFound desc = could not find container \"99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa\": container with ID starting with 99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa not found: ID does not exist" containerID="99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa" Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.203785 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa"} err="failed to get container status \"99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa\": rpc error: code = NotFound desc = could not find container \"99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa\": container with ID starting with 99b095a0c5ebfb4612ec8632f0ad79cfeb9c962195f972ce01333fb0b6faadfa not found: ID does not exist" Jan 27 16:58:20 crc kubenswrapper[4966]: I0127 16:58:20.541785 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" path="/var/lib/kubelet/pods/39b44061-144f-48c2-a15c-dccfb90d9b59/volumes" Jan 27 16:58:29 crc kubenswrapper[4966]: I0127 16:58:29.521244 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:58:29 crc kubenswrapper[4966]: E0127 16:58:29.522494 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 16:58:43 crc kubenswrapper[4966]: I0127 16:58:43.521316 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 16:58:44 crc kubenswrapper[4966]: I0127 16:58:44.438560 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"aac0539be57f6932ac72b94428abfbd1ca11f23206b67d7996e17f5ed19cf93c"} Jan 27 16:58:48 crc kubenswrapper[4966]: I0127 16:58:48.424877 4966 scope.go:117] "RemoveContainer" containerID="91529aa4f4a7f908a0b5f9b299e14a600aae9123687614a695f671f67549493e" Jan 27 16:58:48 crc kubenswrapper[4966]: I0127 16:58:48.453399 4966 scope.go:117] "RemoveContainer" containerID="1901c9d69fc3673aa2db9d2118f62eb67a8bfa9db844e143d2b6223767593885" Jan 27 16:58:48 crc kubenswrapper[4966]: I0127 16:58:48.487622 4966 scope.go:117] "RemoveContainer" containerID="f71628cd74e35ef13efa09f181b379e9dd516b98dc45bd6f6c13e53e4f2e16c2" Jan 27 16:58:55 crc kubenswrapper[4966]: I0127 16:58:55.923382 4966 trace.go:236] Trace[2008327444]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (27-Jan-2026 16:58:54.838) (total time: 1085ms): Jan 27 16:58:55 crc kubenswrapper[4966]: Trace[2008327444]: [1.085263875s] [1.085263875s] END Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.313247 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 16:59:52 crc kubenswrapper[4966]: E0127 16:59:52.314495 4966 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="extract-content" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.314515 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="extract-content" Jan 27 16:59:52 crc kubenswrapper[4966]: E0127 16:59:52.314533 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="registry-server" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.314542 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="registry-server" Jan 27 16:59:52 crc kubenswrapper[4966]: E0127 16:59:52.314558 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="extract-utilities" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.314567 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="extract-utilities" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.314876 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b44061-144f-48c2-a15c-dccfb90d9b59" containerName="registry-server" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.316150 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.318552 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.319368 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.321039 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.330145 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xsspx" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.330756 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.412571 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.412663 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c89tt\" (UniqueName: \"kubernetes.io/projected/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-kube-api-access-c89tt\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.412699 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-config-data\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc 
kubenswrapper[4966]: I0127 16:59:52.412723 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.412749 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.412800 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.412839 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.412914 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.412956 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.514788 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.514938 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.514999 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc 
kubenswrapper[4966]: I0127 16:59:52.515052 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.515129 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c89tt\" (UniqueName: \"kubernetes.io/projected/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-kube-api-access-c89tt\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.515174 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-config-data\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.515202 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.515234 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.515303 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.516249 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.516268 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.516585 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.523726 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded 
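Note the two distinct mount operations for the local PV above: MountVolume.MountDevice mounts the device once at a global path (/mnt/openstack/pv03), and MountVolume.SetUp then exposes that path to the pod. For local volumes the SetUp phase amounts to a bind mount of the global path into the pod's volume directory; a minimal Linux-only Go sketch of that step (both paths are illustrative placeholders, and the program needs root to actually mount):

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Phase 1 result (MountDevice): the device is already mounted here.
	global := "/mnt/openstack/pv03"
	// Phase 2 target (SetUp): the per-pod volume directory; <pod-uid> is a
	// placeholder, not a value from this log.
	podDir := "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~local-volume/local-storage03-crc"

	if err := os.MkdirAll(podDir, 0o750); err != nil {
		fmt.Println("mkdir:", err)
		return
	}
	// SetUp for a local volume is essentially a bind mount of the global
	// device mount path into the pod's volume directory.
	if err := unix.Mount(global, podDir, "", unix.MS_BIND, ""); err != nil {
		fmt.Println("bind mount:", err)
		return
	}
	fmt.Println("MountVolume.SetUp succeeded (bind mount established)")
}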
for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.524600 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-config-data\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.526131 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.526385 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.526840 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.536577 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c89tt\" (UniqueName: \"kubernetes.io/projected/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-kube-api-access-c89tt\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.573398 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") " pod="openstack/tempest-tests-tempest" Jan 27 16:59:52 crc kubenswrapper[4966]: I0127 16:59:52.654491 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 16:59:53 crc kubenswrapper[4966]: I0127 16:59:53.212527 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 16:59:53 crc kubenswrapper[4966]: I0127 16:59:53.809054 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6","Type":"ContainerStarted","Data":"e3f37d8428c71dbe62c3ba1da130e6d705a20de3dddbfb25893b1f5b9ecebbc2"} Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.238786 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn"] Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.243704 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.255834 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.255859 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.303186 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn"] Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.366852 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-secret-volume\") pod \"collect-profiles-29492220-r4xwn\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.367401 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-config-volume\") pod \"collect-profiles-29492220-r4xwn\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.367685 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbbm\" (UniqueName: \"kubernetes.io/projected/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-kube-api-access-8rbbm\") pod \"collect-profiles-29492220-r4xwn\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.469374 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rbbm\" (UniqueName: \"kubernetes.io/projected/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-kube-api-access-8rbbm\") pod \"collect-profiles-29492220-r4xwn\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.469798 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-secret-volume\") pod \"collect-profiles-29492220-r4xwn\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.469947 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-config-volume\") pod \"collect-profiles-29492220-r4xwn\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.477833 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-secret-volume\") pod 
\"collect-profiles-29492220-r4xwn\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.521415 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rbbm\" (UniqueName: \"kubernetes.io/projected/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-kube-api-access-8rbbm\") pod \"collect-profiles-29492220-r4xwn\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.523461 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-config-volume\") pod \"collect-profiles-29492220-r4xwn\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:00 crc kubenswrapper[4966]: I0127 17:00:00.600090 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" Jan 27 17:00:45 crc kubenswrapper[4966]: E0127 17:00:45.631234 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 27 17:00:45 crc kubenswrapper[4966]: E0127 17:00:45.636945 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c89tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 17:00:45 crc kubenswrapper[4966]: E0127 17:00:45.641071 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" Jan 27 17:00:46 crc kubenswrapper[4966]: E0127 17:00:46.435977 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" Jan 27 17:00:47 crc kubenswrapper[4966]: I0127 17:00:47.398077 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn"] Jan 27 17:00:47 crc kubenswrapper[4966]: I0127 17:00:47.446868 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" event={"ID":"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba","Type":"ContainerStarted","Data":"3673de9c38226adde81ba5bcb112e7779dffd6e0cb7aeae832bfc0f59041bcfc"} Jan 27 17:00:48 crc kubenswrapper[4966]: I0127 17:00:48.463138 4966 generic.go:334] "Generic (PLEG): container finished" podID="b2edfedc-db62-4fd6-9ea7-747d3a4af2ba" containerID="38c5f07ba946825b7b53d497d81a06a851aba6669dad4fbf13edabace0fa214e" exitCode=0 Jan 27 17:00:48 crc kubenswrapper[4966]: I0127 17:00:48.463220 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" event={"ID":"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba","Type":"ContainerDied","Data":"38c5f07ba946825b7b53d497d81a06a851aba6669dad4fbf13edabace0fa214e"} Jan 27 17:00:50 crc kubenswrapper[4966]: I0127 17:00:50.905728 4966 util.go:48] "No ready sandbox for pod can be found. 
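Two different failure states appear above for the same container: ErrImagePull records the pull attempt itself failing ("copying config: context canceled"), and on the next sync ImagePullBackOff records kubelet declining to retry immediately. As with crash-looping containers, pulls are retried under an exponential back-off; a minimal retry sketch using the apimachinery wait helpers (the pull function is a stand-in, and the back-off constants are illustrative and scaled to milliseconds so the sketch runs quickly; kubelet's real image back-off works on seconds to minutes):

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

var errPull = errors.New("rpc error: code = Canceled desc = copying config: context canceled")

// pullImage stands in for the CRI image pull that failed in this log.
func pullImage(image string) error { return errPull }

func main() {
	image := "quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
	backoff := wait.Backoff{Duration: 10 * time.Millisecond, Factor: 2.0, Steps: 4}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := pullImage(image); err != nil {
			fmt.Println("ErrImagePull:", err) // the next sync reports ImagePullBackOff
			return false, nil                 // retry after the back-off delay
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("still failing; kubelet would keep backing off:", err)
	}
}

In the log the retry eventually succeeds: the image finishes pulling at 17:01:02 and the tempest container starts at 17:01:04 below.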
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.044612 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-config-volume\") pod \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") "
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.044719 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-secret-volume\") pod \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") "
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.044943 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rbbm\" (UniqueName: \"kubernetes.io/projected/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-kube-api-access-8rbbm\") pod \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\" (UID: \"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba\") "
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.045384 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "b2edfedc-db62-4fd6-9ea7-747d3a4af2ba" (UID: "b2edfedc-db62-4fd6-9ea7-747d3a4af2ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.045671 4966 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.053717 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b2edfedc-db62-4fd6-9ea7-747d3a4af2ba" (UID: "b2edfedc-db62-4fd6-9ea7-747d3a4af2ba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.078176 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-kube-api-access-8rbbm" (OuterVolumeSpecName: "kube-api-access-8rbbm") pod "b2edfedc-db62-4fd6-9ea7-747d3a4af2ba" (UID: "b2edfedc-db62-4fd6-9ea7-747d3a4af2ba"). InnerVolumeSpecName "kube-api-access-8rbbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.147989 4966 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.148020 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rbbm\" (UniqueName: \"kubernetes.io/projected/b2edfedc-db62-4fd6-9ea7-747d3a4af2ba-kube-api-access-8rbbm\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.503109 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn" event={"ID":"b2edfedc-db62-4fd6-9ea7-747d3a4af2ba","Type":"ContainerDied","Data":"3673de9c38226adde81ba5bcb112e7779dffd6e0cb7aeae832bfc0f59041bcfc"}
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.503150 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-r4xwn"
Jan 27 17:00:51 crc kubenswrapper[4966]: I0127 17:00:51.503156 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3673de9c38226adde81ba5bcb112e7779dffd6e0cb7aeae832bfc0f59041bcfc"
Jan 27 17:00:52 crc kubenswrapper[4966]: I0127 17:00:52.011209 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv"]
Jan 27 17:00:52 crc kubenswrapper[4966]: I0127 17:00:52.021014 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-6q6vv"]
Jan 27 17:00:52 crc kubenswrapper[4966]: I0127 17:00:52.551114 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b761748d-7255-42cc-a966-7aff3bda5c8c" path="/var/lib/kubelet/pods/b761748d-7255-42cc-a966-7aff3bda5c8c/volumes"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.176100 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492221-dflqh"]
Jan 27 17:01:00 crc kubenswrapper[4966]: E0127 17:01:00.177346 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2edfedc-db62-4fd6-9ea7-747d3a4af2ba" containerName="collect-profiles"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.177363 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2edfedc-db62-4fd6-9ea7-747d3a4af2ba" containerName="collect-profiles"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.177717 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2edfedc-db62-4fd6-9ea7-747d3a4af2ba" containerName="collect-profiles"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.178736 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.192916 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492221-dflqh"]
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.308774 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqjhs\" (UniqueName: \"kubernetes.io/projected/564b32c4-a589-4d48-82f9-5d56159d4674-kube-api-access-sqjhs\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.309266 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-fernet-keys\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.309453 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-config-data\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.309573 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-combined-ca-bundle\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.412582 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-fernet-keys\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.412766 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-config-data\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.412839 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-combined-ca-bundle\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.413135 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqjhs\" (UniqueName: \"kubernetes.io/projected/564b32c4-a589-4d48-82f9-5d56159d4674-kube-api-access-sqjhs\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.420216 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-combined-ca-bundle\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.420377 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-config-data\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.420383 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-fernet-keys\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.433051 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqjhs\" (UniqueName: \"kubernetes.io/projected/564b32c4-a589-4d48-82f9-5d56159d4674-kube-api-access-sqjhs\") pod \"keystone-cron-29492221-dflqh\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") " pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:00 crc kubenswrapper[4966]: I0127 17:01:00.497983 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:01 crc kubenswrapper[4966]: I0127 17:01:01.160060 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492221-dflqh"]
Jan 27 17:01:01 crc kubenswrapper[4966]: I0127 17:01:01.665260 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492221-dflqh" event={"ID":"564b32c4-a589-4d48-82f9-5d56159d4674","Type":"ContainerStarted","Data":"fe21c219b91df57e5e0e03687f2f92ffc485387f3ab30d9f1e641c178eac4209"}
Jan 27 17:01:01 crc kubenswrapper[4966]: I0127 17:01:01.665570 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492221-dflqh" event={"ID":"564b32c4-a589-4d48-82f9-5d56159d4674","Type":"ContainerStarted","Data":"b47ed84c619c27c45925360395600278187aa1659dfb07a8dd2b0d5dbc5310cd"}
Jan 27 17:01:01 crc kubenswrapper[4966]: I0127 17:01:01.695885 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492221-dflqh" podStartSLOduration=1.695865484 podStartE2EDuration="1.695865484s" podCreationTimestamp="2026-01-27 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:01:01.681122982 +0000 UTC m=+4727.983916470" watchObservedRunningTime="2026-01-27 17:01:01.695865484 +0000 UTC m=+4727.998658972"
Jan 27 17:01:02 crc kubenswrapper[4966]: I0127 17:01:02.408234 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 27 17:01:04 crc kubenswrapper[4966]: I0127 17:01:04.717224 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6","Type":"ContainerStarted","Data":"eae15376315732b2d15dc71ea83591a96fbb2edefb826ed3e0188b21d55def84"}
Jan 27 17:01:04 crc kubenswrapper[4966]: I0127 17:01:04.741729 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.559294179 podStartE2EDuration="1m13.741713043s" podCreationTimestamp="2026-01-27 16:59:51 +0000 UTC" firstStartedPulling="2026-01-27 16:59:53.222872779 +0000 UTC m=+4659.525666267" lastFinishedPulling="2026-01-27 17:01:02.405291633 +0000 UTC m=+4728.708085131" observedRunningTime="2026-01-27 17:01:04.737511422 +0000 UTC m=+4731.040304930" watchObservedRunningTime="2026-01-27 17:01:04.741713043 +0000 UTC m=+4731.044506521"
Jan 27 17:01:05 crc kubenswrapper[4966]: I0127 17:01:05.730639 4966 generic.go:334] "Generic (PLEG): container finished" podID="564b32c4-a589-4d48-82f9-5d56159d4674" containerID="fe21c219b91df57e5e0e03687f2f92ffc485387f3ab30d9f1e641c178eac4209" exitCode=0
Jan 27 17:01:05 crc kubenswrapper[4966]: I0127 17:01:05.730726 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492221-dflqh" event={"ID":"564b32c4-a589-4d48-82f9-5d56159d4674","Type":"ContainerDied","Data":"fe21c219b91df57e5e0e03687f2f92ffc485387f3ab30d9f1e641c178eac4209"}
Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.331215 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492221-dflqh"
Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.413863 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-combined-ca-bundle\") pod \"564b32c4-a589-4d48-82f9-5d56159d4674\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") "
Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.414051 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-fernet-keys\") pod \"564b32c4-a589-4d48-82f9-5d56159d4674\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") "
Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.414276 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-config-data\") pod \"564b32c4-a589-4d48-82f9-5d56159d4674\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") "
Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.414409 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqjhs\" (UniqueName: \"kubernetes.io/projected/564b32c4-a589-4d48-82f9-5d56159d4674-kube-api-access-sqjhs\") pod \"564b32c4-a589-4d48-82f9-5d56159d4674\" (UID: \"564b32c4-a589-4d48-82f9-5d56159d4674\") "
Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.419872 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "564b32c4-a589-4d48-82f9-5d56159d4674" (UID: "564b32c4-a589-4d48-82f9-5d56159d4674"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.420205 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564b32c4-a589-4d48-82f9-5d56159d4674-kube-api-access-sqjhs" (OuterVolumeSpecName: "kube-api-access-sqjhs") pod "564b32c4-a589-4d48-82f9-5d56159d4674" (UID: "564b32c4-a589-4d48-82f9-5d56159d4674"). InnerVolumeSpecName "kube-api-access-sqjhs".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.456493 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "564b32c4-a589-4d48-82f9-5d56159d4674" (UID: "564b32c4-a589-4d48-82f9-5d56159d4674"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.491464 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-config-data" (OuterVolumeSpecName: "config-data") pod "564b32c4-a589-4d48-82f9-5d56159d4674" (UID: "564b32c4-a589-4d48-82f9-5d56159d4674"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.518632 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.518668 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqjhs\" (UniqueName: \"kubernetes.io/projected/564b32c4-a589-4d48-82f9-5d56159d4674-kube-api-access-sqjhs\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.518679 4966 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.518689 4966 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564b32c4-a589-4d48-82f9-5d56159d4674-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.764238 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492221-dflqh" event={"ID":"564b32c4-a589-4d48-82f9-5d56159d4674","Type":"ContainerDied","Data":"b47ed84c619c27c45925360395600278187aa1659dfb07a8dd2b0d5dbc5310cd"} Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.764621 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b47ed84c619c27c45925360395600278187aa1659dfb07a8dd2b0d5dbc5310cd" Jan 27 17:01:07 crc kubenswrapper[4966]: I0127 17:01:07.764310 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492221-dflqh" Jan 27 17:01:10 crc kubenswrapper[4966]: I0127 17:01:10.120039 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:01:10 crc kubenswrapper[4966]: I0127 17:01:10.120463 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:01:40 crc kubenswrapper[4966]: I0127 17:01:40.119287 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:01:40 crc kubenswrapper[4966]: I0127 17:01:40.119859 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:01:48 crc kubenswrapper[4966]: I0127 17:01:48.773356 4966 scope.go:117] "RemoveContainer" containerID="4e9ad1f3fd2a08c212ef5e1fcd2ca015a7101d1706a9f50d663681c8f29b1aee" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.618140 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-85mqp"] Jan 27 17:02:04 crc kubenswrapper[4966]: E0127 17:02:04.619490 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564b32c4-a589-4d48-82f9-5d56159d4674" containerName="keystone-cron" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.619509 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="564b32c4-a589-4d48-82f9-5d56159d4674" containerName="keystone-cron" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.619841 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="564b32c4-a589-4d48-82f9-5d56159d4674" containerName="keystone-cron" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.624377 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.714862 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85mqp"] Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.816356 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-catalog-content\") pod \"community-operators-85mqp\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.817048 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxzcq\" (UniqueName: \"kubernetes.io/projected/055a60e0-b7aa-4b35-9807-01fe7095113d-kube-api-access-dxzcq\") pod \"community-operators-85mqp\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.817995 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-utilities\") pod \"community-operators-85mqp\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.919772 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxzcq\" (UniqueName: \"kubernetes.io/projected/055a60e0-b7aa-4b35-9807-01fe7095113d-kube-api-access-dxzcq\") pod \"community-operators-85mqp\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.919833 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-utilities\") pod \"community-operators-85mqp\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.919874 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-catalog-content\") pod \"community-operators-85mqp\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.926061 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-catalog-content\") pod \"community-operators-85mqp\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.926798 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-utilities\") pod \"community-operators-85mqp\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.955447 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dxzcq\" (UniqueName: \"kubernetes.io/projected/055a60e0-b7aa-4b35-9807-01fe7095113d-kube-api-access-dxzcq\") pod \"community-operators-85mqp\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:04 crc kubenswrapper[4966]: I0127 17:02:04.965607 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:05 crc kubenswrapper[4966]: I0127 17:02:05.774059 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85mqp"] Jan 27 17:02:06 crc kubenswrapper[4966]: I0127 17:02:06.515231 4966 generic.go:334] "Generic (PLEG): container finished" podID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerID="4e3017c62d9a17bc475e6ac95dba93e1231a00670830abde9338be0ad9bdbe00" exitCode=0 Jan 27 17:02:06 crc kubenswrapper[4966]: I0127 17:02:06.515435 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85mqp" event={"ID":"055a60e0-b7aa-4b35-9807-01fe7095113d","Type":"ContainerDied","Data":"4e3017c62d9a17bc475e6ac95dba93e1231a00670830abde9338be0ad9bdbe00"} Jan 27 17:02:06 crc kubenswrapper[4966]: I0127 17:02:06.515519 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85mqp" event={"ID":"055a60e0-b7aa-4b35-9807-01fe7095113d","Type":"ContainerStarted","Data":"48daecc54db6494bfa81292f5c46d4f90a04f2a5e576ebd4a83e6885c1f3e2a8"} Jan 27 17:02:10 crc kubenswrapper[4966]: I0127 17:02:10.119333 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:02:10 crc kubenswrapper[4966]: I0127 17:02:10.121340 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:02:10 crc kubenswrapper[4966]: I0127 17:02:10.121876 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 17:02:10 crc kubenswrapper[4966]: I0127 17:02:10.123732 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aac0539be57f6932ac72b94428abfbd1ca11f23206b67d7996e17f5ed19cf93c"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:02:10 crc kubenswrapper[4966]: I0127 17:02:10.123827 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://aac0539be57f6932ac72b94428abfbd1ca11f23206b67d7996e17f5ed19cf93c" gracePeriod=600 Jan 27 17:02:10 crc kubenswrapper[4966]: I0127 17:02:10.568521 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" 
containerID="aac0539be57f6932ac72b94428abfbd1ca11f23206b67d7996e17f5ed19cf93c" exitCode=0 Jan 27 17:02:10 crc kubenswrapper[4966]: I0127 17:02:10.568582 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"aac0539be57f6932ac72b94428abfbd1ca11f23206b67d7996e17f5ed19cf93c"} Jan 27 17:02:10 crc kubenswrapper[4966]: I0127 17:02:10.568620 4966 scope.go:117] "RemoveContainer" containerID="1bb7da8a427196e8cc78819e02c2b5ff1b8aea5a895b047bc9581e28224a0892" Jan 27 17:02:15 crc kubenswrapper[4966]: I0127 17:02:15.633647 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85mqp" event={"ID":"055a60e0-b7aa-4b35-9807-01fe7095113d","Type":"ContainerStarted","Data":"d7a05510a11a99728ea20e6ee856e10ff7570727172dfb1e7817f0aba3b8ec44"} Jan 27 17:02:15 crc kubenswrapper[4966]: I0127 17:02:15.640522 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246"} Jan 27 17:02:18 crc kubenswrapper[4966]: I0127 17:02:18.681395 4966 generic.go:334] "Generic (PLEG): container finished" podID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerID="d7a05510a11a99728ea20e6ee856e10ff7570727172dfb1e7817f0aba3b8ec44" exitCode=0 Jan 27 17:02:18 crc kubenswrapper[4966]: I0127 17:02:18.681946 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85mqp" event={"ID":"055a60e0-b7aa-4b35-9807-01fe7095113d","Type":"ContainerDied","Data":"d7a05510a11a99728ea20e6ee856e10ff7570727172dfb1e7817f0aba3b8ec44"} Jan 27 17:02:19 crc kubenswrapper[4966]: I0127 17:02:19.693956 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85mqp" event={"ID":"055a60e0-b7aa-4b35-9807-01fe7095113d","Type":"ContainerStarted","Data":"c5ef936eaa3fb8347ffddd13c05f053ebbd12d16c3c2c2007442555e9a147b8b"} Jan 27 17:02:19 crc kubenswrapper[4966]: I0127 17:02:19.769932 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-85mqp" podStartSLOduration=3.002903433 podStartE2EDuration="15.758332589s" podCreationTimestamp="2026-01-27 17:02:04 +0000 UTC" firstStartedPulling="2026-01-27 17:02:06.517684126 +0000 UTC m=+4792.820477614" lastFinishedPulling="2026-01-27 17:02:19.273113282 +0000 UTC m=+4805.575906770" observedRunningTime="2026-01-27 17:02:19.731385864 +0000 UTC m=+4806.034179372" watchObservedRunningTime="2026-01-27 17:02:19.758332589 +0000 UTC m=+4806.061126087" Jan 27 17:02:24 crc kubenswrapper[4966]: I0127 17:02:24.967648 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:24 crc kubenswrapper[4966]: I0127 17:02:24.969165 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:26 crc kubenswrapper[4966]: I0127 17:02:26.842163 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-85mqp" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" probeResult="failure" output=< Jan 27 17:02:26 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 
1s Jan 27 17:02:26 crc kubenswrapper[4966]: > Jan 27 17:02:36 crc kubenswrapper[4966]: I0127 17:02:36.095468 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-85mqp" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" probeResult="failure" output=< Jan 27 17:02:36 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:02:36 crc kubenswrapper[4966]: > Jan 27 17:02:45 crc kubenswrapper[4966]: I0127 17:02:45.057777 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:45 crc kubenswrapper[4966]: I0127 17:02:45.119500 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:02:46 crc kubenswrapper[4966]: I0127 17:02:46.210379 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85mqp"] Jan 27 17:02:46 crc kubenswrapper[4966]: I0127 17:02:46.303707 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zhtk"] Jan 27 17:02:46 crc kubenswrapper[4966]: I0127 17:02:46.313320 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6zhtk" podUID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerName="registry-server" containerID="cri-o://03e9341e5e55a8cc92fc93883fd156c84e643556f0f0801d69970f40ebb61cbe" gracePeriod=2 Jan 27 17:02:47 crc kubenswrapper[4966]: I0127 17:02:47.053849 4966 generic.go:334] "Generic (PLEG): container finished" podID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerID="03e9341e5e55a8cc92fc93883fd156c84e643556f0f0801d69970f40ebb61cbe" exitCode=0 Jan 27 17:02:47 crc kubenswrapper[4966]: I0127 17:02:47.055980 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zhtk" event={"ID":"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680","Type":"ContainerDied","Data":"03e9341e5e55a8cc92fc93883fd156c84e643556f0f0801d69970f40ebb61cbe"} Jan 27 17:02:47 crc kubenswrapper[4966]: I0127 17:02:47.946460 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zhtk" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.079216 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zhtk" event={"ID":"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680","Type":"ContainerDied","Data":"3eb70555e7ace9fe82fa043398ec19659b1710d56b0ecba16d395b6eacdfa509"} Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.079405 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6zhtk" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.081067 4966 scope.go:117] "RemoveContainer" containerID="03e9341e5e55a8cc92fc93883fd156c84e643556f0f0801d69970f40ebb61cbe" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.095517 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-utilities\") pod \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.096084 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-catalog-content\") pod \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.096124 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgv94\" (UniqueName: \"kubernetes.io/projected/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-kube-api-access-cgv94\") pod \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\" (UID: \"b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680\") " Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.102573 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-utilities" (OuterVolumeSpecName: "utilities") pod "b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" (UID: "b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.154599 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-kube-api-access-cgv94" (OuterVolumeSpecName: "kube-api-access-cgv94") pod "b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" (UID: "b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680"). InnerVolumeSpecName "kube-api-access-cgv94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.160643 4966 scope.go:117] "RemoveContainer" containerID="b54e0e295b2862e6e03d97caba4759065123e6e8f4c4eca16a4bcb39ba9bcbdb" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.199675 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgv94\" (UniqueName: \"kubernetes.io/projected/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-kube-api-access-cgv94\") on node \"crc\" DevicePath \"\"" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.199716 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.250007 4966 scope.go:117] "RemoveContainer" containerID="770a5f8200e923718c40271c35681a073e52ac27df4549fb0b825500d9fde32b" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.295540 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" (UID: "b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.302663 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.416435 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zhtk"] Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.426861 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6zhtk"] Jan 27 17:02:48 crc kubenswrapper[4966]: I0127 17:02:48.536005 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" path="/var/lib/kubelet/pods/b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680/volumes" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.353689 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-72wzd"] Jan 27 17:02:52 crc kubenswrapper[4966]: E0127 17:02:52.363343 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerName="extract-utilities" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.365628 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerName="extract-utilities" Jan 27 17:02:52 crc kubenswrapper[4966]: E0127 17:02:52.365678 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerName="extract-content" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.365688 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerName="extract-content" Jan 27 17:02:52 crc kubenswrapper[4966]: E0127 17:02:52.365696 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerName="registry-server" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.365702 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerName="registry-server" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.366028 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b8a2b7-39e4-4fc2-afd3-7f07fb8bc680" containerName="registry-server" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.381865 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.505376 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-catalog-content\") pod \"certified-operators-72wzd\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.505536 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdz56\" (UniqueName: \"kubernetes.io/projected/f55af454-c7f3-4d9c-8c30-108b947410a7-kube-api-access-pdz56\") pod \"certified-operators-72wzd\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.505961 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-utilities\") pod \"certified-operators-72wzd\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.506961 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-72wzd"] Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.608176 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-utilities\") pod \"certified-operators-72wzd\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.608307 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-catalog-content\") pod \"certified-operators-72wzd\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.608371 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdz56\" (UniqueName: \"kubernetes.io/projected/f55af454-c7f3-4d9c-8c30-108b947410a7-kube-api-access-pdz56\") pod \"certified-operators-72wzd\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.616463 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-utilities\") pod \"certified-operators-72wzd\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.617956 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-catalog-content\") pod \"certified-operators-72wzd\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.661813 4966 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pdz56\" (UniqueName: \"kubernetes.io/projected/f55af454-c7f3-4d9c-8c30-108b947410a7-kube-api-access-pdz56\") pod \"certified-operators-72wzd\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:52 crc kubenswrapper[4966]: I0127 17:02:52.717088 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:02:56 crc kubenswrapper[4966]: I0127 17:02:56.215134 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-72wzd"] Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.248395 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72wzd" event={"ID":"f55af454-c7f3-4d9c-8c30-108b947410a7","Type":"ContainerDied","Data":"732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249"} Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.249540 4966 generic.go:334] "Generic (PLEG): container finished" podID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerID="732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249" exitCode=0 Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.249918 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72wzd" event={"ID":"f55af454-c7f3-4d9c-8c30-108b947410a7","Type":"ContainerStarted","Data":"aab47489e062102b6caaf7371ad364ce268e8fa5341c9e1e33f3196f5334984d"} Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.257972 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.750177 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.750182 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.896826 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.896855 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.896945 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.896951 4966 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.912318 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.912378 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.912483 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:57 crc kubenswrapper[4966]: I0127 17:02:57.912503 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:58 crc kubenswrapper[4966]: I0127 17:02:58.040192 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:58 crc kubenswrapper[4966]: I0127 17:02:58.040235 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:58 crc kubenswrapper[4966]: I0127 17:02:58.756286 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:58 crc kubenswrapper[4966]: I0127 17:02:58.756304 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:58 crc kubenswrapper[4966]: I0127 17:02:58.756817 4966 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:58 crc kubenswrapper[4966]: I0127 17:02:58.756855 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:58 crc kubenswrapper[4966]: I0127 17:02:58.968062 4966 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-fws6n container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:58 crc kubenswrapper[4966]: I0127 17:02:58.968108 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" podUID="c4ecd65d-fa3e-456b-8db6-314cc20216ed" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.087964 4966 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zk5gr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.088030 4966 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zk5gr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.088060 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" podUID="1317de86-7041-4b5a-8403-98489b8dc338" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.088024 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" podUID="1317de86-7041-4b5a-8403-98489b8dc338" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.088178 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.088173 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.088202 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.088238 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.275645 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72wzd" event={"ID":"f55af454-c7f3-4d9c-8c30-108b947410a7","Type":"ContainerStarted","Data":"7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90"} Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.288265 4966 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-cr8s9 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.288349 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" podUID="a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.740246 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:02:59 crc kubenswrapper[4966]: I0127 17:02:59.740904 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:00 crc kubenswrapper[4966]: I0127 17:03:00.156087 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" podUID="2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 
27 17:03:00 crc kubenswrapper[4966]: I0127 17:03:00.156112 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" podUID="2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:00 crc kubenswrapper[4966]: I0127 17:03:00.747511 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:00 crc kubenswrapper[4966]: I0127 17:03:00.747578 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:00 crc kubenswrapper[4966]: I0127 17:03:00.747498 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:00 crc kubenswrapper[4966]: I0127 17:03:00.747913 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:01 crc kubenswrapper[4966]: I0127 17:03:01.451200 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:01 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:01 crc kubenswrapper[4966]: > Jan 27 17:03:01 crc kubenswrapper[4966]: I0127 17:03:01.451206 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:01 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:01 crc kubenswrapper[4966]: > Jan 27 17:03:01 crc kubenswrapper[4966]: I0127 17:03:01.523047 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:02 crc kubenswrapper[4966]: I0127 17:03:02.344128 4966 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-rs2mz container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get 
\"https://10.217.0.57:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:02 crc kubenswrapper[4966]: I0127 17:03:02.344571 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podUID="d0983135-11bf-4938-9360-757bc4556ec0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:02 crc kubenswrapper[4966]: I0127 17:03:02.344169 4966 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-rs2mz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:02 crc kubenswrapper[4966]: I0127 17:03:02.344669 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podUID="d0983135-11bf-4938-9360-757bc4556ec0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:10 crc kubenswrapper[4966]: I0127 17:03:10.757702 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:10 crc kubenswrapper[4966]: I0127 17:03:10.757707 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:10 crc kubenswrapper[4966]: I0127 17:03:10.761093 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:10 crc kubenswrapper[4966]: I0127 17:03:10.761125 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.580188 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.580196 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" 
podUID="3e510e0a-a47d-416e-aec1-c7de88b0a2af" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.621183 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" podUID="3e510e0a-a47d-416e-aec1-c7de88b0a2af" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.621232 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.621252 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.621299 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.621308 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.621357 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.621554 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.763136 4966 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-x6l4k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:11 crc kubenswrapper[4966]: I0127 17:03:11.763578 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" podUID="b9f6b9a4-ded2-467b-9e87-6fafa667f709" containerName="perses-operator" probeResult="failure" 
output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: E0127 17:03:12.251362 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf55af454_c7f3_4d9c_8c30_108b947410a7.slice/crio-7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90.scope\": RecentStats: unable to find data in memory cache]" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.302042 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.302119 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.302384 4966 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-rs2mz container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.302379 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.302401 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podUID="d0983135-11bf-4938-9360-757bc4556ec0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.302434 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.302503 4966 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-rs2mz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.302518 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podUID="d0983135-11bf-4938-9360-757bc4556ec0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.456037 4966 generic.go:334] "Generic (PLEG): container finished" podID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerID="7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90" exitCode=0 Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.457371 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72wzd" event={"ID":"f55af454-c7f3-4d9c-8c30-108b947410a7","Type":"ContainerDied","Data":"7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90"} Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.579118 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-wpv4z" podUID="ce06a03b-db66-49be-ace7-f79a0b78dc62" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.579428 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-wpv4z" podUID="ce06a03b-db66-49be-ace7-f79a0b78dc62" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.654265 4966 trace.go:236] Trace[329122915]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (27-Jan-2026 17:03:09.971) (total time: 2677ms): Jan 27 17:03:12 crc kubenswrapper[4966]: Trace[329122915]: [2.67712059s] [2.67712059s] END Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.894920 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.894982 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.895051 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.895121 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="opa" probeResult="failure" 
output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.911421 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.911471 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.911470 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:12 crc kubenswrapper[4966]: I0127 17:03:12.911522 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:13 crc kubenswrapper[4966]: I0127 17:03:13.686208 4966 patch_prober.go:28] interesting pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:13 crc kubenswrapper[4966]: I0127 17:03:13.686594 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:13 crc kubenswrapper[4966]: I0127 17:03:13.686253 4966 patch_prober.go:28] interesting pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:13 crc kubenswrapper[4966]: I0127 17:03:13.686891 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.200079 4966 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" podUID="093d4126-d96d-475a-9519-020f2f73a742" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.200092 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" podUID="093d4126-d96d-475a-9519-020f2f73a742" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.314523 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podUID="45594823-cdbb-4586-95d2-f2af9f6460b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.314849 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podUID="45594823-cdbb-4586-95d2-f2af9f6460b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.466118 4966 patch_prober.go:28] interesting pod/metrics-server-5fcbd5f794-2hhjm container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.466492 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" podUID="525a9ae1-69bf-4f75-b283-c0844b828a90" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.466604 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" podUID="624197a8-447a-4004-a1e0-679ce29dbe86" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.466644 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" podUID="624197a8-447a-4004-a1e0-679ce29dbe86" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.636154 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podUID="08ac68d1-220d-4098-9eed-6d0e3b752e5d" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.720146 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" podUID="64b84834-e9db-4f50-a7c7-6d24302652d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.720563 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podUID="08ac68d1-220d-4098-9eed-6d0e3b752e5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.744963 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="20044d98-b229-4e9a-946f-b18902841fe6" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.745405 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="20044d98-b229-4e9a-946f-b18902841fe6" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.885462 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" podUID="3ae401e5-feea-47d3-9c86-1e33635a461a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.885813 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" podUID="3ae401e5-feea-47d3-9c86-1e33635a461a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.885869 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" podUID="64b84834-e9db-4f50-a7c7-6d24302652d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:14 crc kubenswrapper[4966]: I0127 17:03:14.969274 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" podUID="8645d6d2-f7cd-4578-9a1a-8b07beeae08c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051057 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" podUID="6006cb9c-d22f-47b1-b8b6-cb999ecab7df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051068 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podUID="3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051107 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" podUID="6006cb9c-d22f-47b1-b8b6-cb999ecab7df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051167 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051207 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051216 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051245 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" podUID="e2cfe3d1-d500-418e-bc6b-4da3482999c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051148 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" podUID="e2cfe3d1-d500-418e-bc6b-4da3482999c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051268 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051299 4966 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" podUID="f096bdf7-f589-4344-b71f-ab9db2eded5f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051336 4966 patch_prober.go:28] interesting pod/monitoring-plugin-785c968969-bl9x5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051358 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" podUID="9bc3bce9-60e2-4ab9-ab45-28e69ba4a877" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051366 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podUID="dd6f6600-3072-42e6-a8ca-5e72c960425a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051547 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podUID="3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.051398 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podUID="dd6f6600-3072-42e6-a8ca-5e72c960425a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:15 crc kubenswrapper[4966]: I0127 17:03:15.495494 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72wzd" event={"ID":"f55af454-c7f3-4d9c-8c30-108b947410a7","Type":"ContainerStarted","Data":"50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0"} Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.048205 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" podUID="cfa058e6-1d6f-4dc2-8058-c00b201175b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.114392 4966 patch_prober.go:28] interesting pod/thanos-querier-78c6ff45cc-gspnf container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.114521 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" podUID="76f00c13-2195-40be-829a-ce9e9c94a795" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.369711 4966 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.369877 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.445181 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" podUID="20e54080-e732-4925-b0c2-35669744821d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.636256 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" podUID="e49e9fb2-a5f0-4106-b239-93d488e4f515" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.848973 4966 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-88nmc container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.849059 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" podUID="c70eec8b-c8da-4620-9c5e-bb19e5d66424" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.849774 4966 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.849802 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.979165 4966 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-95xm4 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.979291 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" podUID="80c26c09-83a0-4b08-979b-a138a5ed5d4b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.993994 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="20223cbd-a9e0-4eb8-b051-0833bebe5975" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:16 crc kubenswrapper[4966]: I0127 17:03:16.994039 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="20223cbd-a9e0-4eb8-b051-0833bebe5975" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.093515 4966 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-6s27c container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.093589 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" podUID="9c5e1e82-3053-4895-91ce-56475540fc35" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.746295 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="832412d6-8f0c-4372-b056-87d49ac6f4bd" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.746310 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.746298 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" 
podUID="832412d6-8f0c-4372-b056-87d49ac6f4bd" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.758164 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.848850 4966 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-88nmc container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.849223 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" podUID="c70eec8b-c8da-4620-9c5e-bb19e5d66424" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.895706 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.895782 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.896276 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.896514 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.896346 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.896584 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.912052 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:17 crc kubenswrapper[4966]: I0127 17:03:17.912105 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:18 crc kubenswrapper[4966]: I0127 17:03:18.041222 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:18 crc kubenswrapper[4966]: I0127 17:03:18.041239 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:18 crc kubenswrapper[4966]: I0127 17:03:18.051536 4966 trace.go:236] Trace[1792612193]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (27-Jan-2026 17:03:14.721) (total time: 3326ms): Jan 27 17:03:18 crc kubenswrapper[4966]: Trace[1792612193]: [3.326365502s] [3.326365502s] END Jan 27 17:03:18 crc kubenswrapper[4966]: I0127 17:03:18.746507 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:18 crc kubenswrapper[4966]: I0127 17:03:18.746730 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:18 crc kubenswrapper[4966]: I0127 17:03:18.831139 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:18 crc kubenswrapper[4966]: I0127 17:03:18.831157 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:18 crc kubenswrapper[4966]: I0127 17:03:18.831198 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" 
probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:18 crc kubenswrapper[4966]: I0127 17:03:18.831258 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.088282 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.088336 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.088423 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.088350 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.292195 4966 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-cr8s9 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.292342 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" podUID="a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.376102 4966 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-cr8s9 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 
17:03:19.376160 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.376191 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" podUID="a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.376207 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.376236 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:19 crc kubenswrapper[4966]: I0127 17:03:19.376283 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.157333 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" podUID="2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.157393 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" podUID="2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.173206 4966 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-5bg28 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.173341 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" podUID="ee84a560-7150-49bd-94ac-e190aab8bc92" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:20 crc 
kubenswrapper[4966]: I0127 17:03:20.673285 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.673376 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.673306 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.673528 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.676431 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.680552 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"e2295b9d9c5579954267f6c3ac4518b2cfb177f96abe9f67553ea254d6c45bbd"} pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.681945 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" containerID="cri-o://e2295b9d9c5579954267f6c3ac4518b2cfb177f96abe9f67553ea254d6c45bbd" gracePeriod=30 Jan 27 17:03:20 crc kubenswrapper[4966]: I0127 17:03:20.750190 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="20044d98-b229-4e9a-946f-b18902841fe6" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.095131 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-nn8hx" podUID="9fb28925-f952-48ea-88e5-db1ec4dba047" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc 
kubenswrapper[4966]: I0127 17:03:21.095210 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-nn8hx" podUID="9fb28925-f952-48ea-88e5-db1ec4dba047" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.115248 4966 patch_prober.go:28] interesting pod/thanos-querier-78c6ff45cc-gspnf container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.115330 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" podUID="76f00c13-2195-40be-829a-ce9e9c94a795" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.676054 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.676096 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" podUID="3e510e0a-a47d-416e-aec1-c7de88b0a2af" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.758241 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.758292 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" podUID="3e510e0a-a47d-416e-aec1-c7de88b0a2af" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.758319 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.758319 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.758319 4966 
prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.758387 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.758405 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-fpvwf" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.758408 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.759721 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"e224ab4c87320968867721f0b5d5d97a3453e0f608b2818b76ea299d6c120917"} pod="metallb-system/frr-k8s-fpvwf" containerMessage="Container frr failed liveness probe, will be restarted" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.760739 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="frr" containerID="cri-o://e224ab4c87320968867721f0b5d5d97a3453e0f608b2818b76ea299d6c120917" gracePeriod=2 Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.840147 4966 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-x6l4k container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.15:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.840216 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" podUID="b9f6b9a4-ded2-467b-9e87-6fafa667f709" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.15:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.840276 4966 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-x6l4k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.840291 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" podUID="b9f6b9a4-ded2-467b-9e87-6fafa667f709" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.942134 4966 
patch_prober.go:28] interesting pod/console-76cf6b7d9d-8vc2q container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:21 crc kubenswrapper[4966]: I0127 17:03:21.942492 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-76cf6b7d9d-8vc2q" podUID="f71765ab-530f-4029-9091-f63413efd9c2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.301135 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.301204 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.301399 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.301463 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.578205 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-wpv4z" podUID="ce06a03b-db66-49be-ace7-f79a0b78dc62" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.578212 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-wpv4z" podUID="ce06a03b-db66-49be-ace7-f79a0b78dc62" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.717223 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.717578 
4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.743473 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="832412d6-8f0c-4372-b056-87d49ac6f4bd" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.743487 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="832412d6-8f0c-4372-b056-87d49ac6f4bd" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.896625 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.896670 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.896737 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.896676 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.912317 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.912353 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.912363 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:22 crc kubenswrapper[4966]: I0127 17:03:22.912382 4966 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.591874 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerDied","Data":"e224ab4c87320968867721f0b5d5d97a3453e0f608b2818b76ea299d6c120917"} Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.591810 4966 generic.go:334] "Generic (PLEG): container finished" podID="e75b042c-789e-43fc-8736-b3f5093f21db" containerID="e224ab4c87320968867721f0b5d5d97a3453e0f608b2818b76ea299d6c120917" exitCode=143 Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.686526 4966 patch_prober.go:28] interesting pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.686584 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.686788 4966 patch_prober.go:28] interesting pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.686864 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.851177 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.851237 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.851297 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:23 crc kubenswrapper[4966]: I0127 17:03:23.851310 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.044149 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" podUID="cfd02a37-95ae-43f0-9e50-2e9d78202bd9" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.044550 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" podUID="cfd02a37-95ae-43f0-9e50-2e9d78202bd9" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.185064 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" podUID="093d4126-d96d-475a-9519-020f2f73a742" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.185064 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" podUID="eb03df91-4797-41be-a7fb-7ca572014c88" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.274048 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podUID="45594823-cdbb-4586-95d2-f2af9f6460b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.315090 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" podUID="9dcc8f2a-06d2-493e-b0ce-50120cef400e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.426060 4966 patch_prober.go:28] interesting pod/metrics-server-5fcbd5f794-2hhjm container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.426083 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" podUID="624197a8-447a-4004-a1e0-679ce29dbe86" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.426127 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" podUID="525a9ae1-69bf-4f75-b283-c0844b828a90" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.426149 4966 patch_prober.go:28] interesting pod/metrics-server-5fcbd5f794-2hhjm container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.426220 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" podUID="525a9ae1-69bf-4f75-b283-c0844b828a90" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.479293 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" podUID="ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.605080 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podUID="08ac68d1-220d-4098-9eed-6d0e3b752e5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.605088 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" podUID="871381eb-c218-433c-a004-fea884f4ced0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.688050 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" podUID="3ae401e5-feea-47d3-9c86-1e33635a461a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.688130 4966 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" podUID="dd40e2cd-59aa-442b-b27a-209632cba6e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.759795 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.759812 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-q8wrz" podUID="818e66f7-b294-448b-9d55-99de7ebd3f34" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.759959 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.760782 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-q8wrz" podUID="818e66f7-b294-448b-9d55-99de7ebd3f34" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.813121 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" podUID="f096bdf7-f589-4344-b71f-ab9db2eded5f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.813143 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" podUID="64b84834-e9db-4f50-a7c7-6d24302652d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.895102 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" podUID="6006cb9c-d22f-47b1-b8b6-cb999ecab7df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.895162 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" podUID="8645d6d2-f7cd-4578-9a1a-8b07beeae08c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.979051 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podUID="3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.979056 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" podUID="e2cfe3d1-d500-418e-bc6b-4da3482999c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.979107 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podUID="dd6f6600-3072-42e6-a8ca-5e72c960425a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.979168 4966 patch_prober.go:28] interesting pod/monitoring-plugin-785c968969-bl9x5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.979214 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" podUID="734cfb67-80ec-42a1-8d52-298ae82e1a6b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:24 crc kubenswrapper[4966]: I0127 17:03:24.979274 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" podUID="9bc3bce9-60e2-4ab9-ab45-28e69ba4a877" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:25 crc kubenswrapper[4966]: I0127 17:03:25.119163 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" podUID="434d2d44-cb00-40d2-90b5-64dd65faadc8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:25 crc kubenswrapper[4966]: I0127 17:03:25.455096 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-dvjsg" podUID="8ebef5e4-f520-44af-9488-659932ab7ff8" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:25 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:25 crc kubenswrapper[4966]: > Jan 27 17:03:25 crc kubenswrapper[4966]: I0127 17:03:25.455090 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-dvjsg" podUID="8ebef5e4-f520-44af-9488-659932ab7ff8" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:25 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:25 crc kubenswrapper[4966]: > Jan 27 17:03:25 crc kubenswrapper[4966]: I0127 17:03:25.556296 4966 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-xfl7t container/registry namespace/openshift-image-registry: 
Readiness probe status=failure output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:25 crc kubenswrapper[4966]: I0127 17:03:25.556372 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" podUID="b55dd294-68a4-4eba-b567-796255583e28" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:25 crc kubenswrapper[4966]: I0127 17:03:25.556522 4966 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-xfl7t container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:25 crc kubenswrapper[4966]: I0127 17:03:25.556590 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" podUID="b55dd294-68a4-4eba-b567-796255583e28" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:25 crc kubenswrapper[4966]: I0127 17:03:25.651018 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerStarted","Data":"7808933cbbe637209e11b750181817923cc04aa8075e3eae9a7550d4a56d0841"} Jan 27 17:03:25 crc kubenswrapper[4966]: I0127 17:03:25.740351 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-86gp2" podUID="25afd019-0360-4ea5-ac94-94c6f42bb8a8" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.041365 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-7br4n" podUID="0bcb091b-8f56-46c3-8437-2505b27684da" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:26 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:26 crc kubenswrapper[4966]: > Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.060115 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-7br4n" podUID="0bcb091b-8f56-46c3-8437-2505b27684da" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:26 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:26 crc kubenswrapper[4966]: > Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.060139 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-72wzd" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:26 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:26 crc kubenswrapper[4966]: > Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.091122 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" 
podUID="cfa058e6-1d6f-4dc2-8058-c00b201175b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.091327 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" podUID="cfa058e6-1d6f-4dc2-8058-c00b201175b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.114106 4966 patch_prober.go:28] interesting pod/thanos-querier-78c6ff45cc-gspnf container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.114162 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" podUID="76f00c13-2195-40be-829a-ce9e9c94a795" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.259419 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-85mqp" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:26 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:26 crc kubenswrapper[4966]: > Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.261541 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-85mqp" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:26 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:26 crc kubenswrapper[4966]: > Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.370599 4966 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.370666 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.485084 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" podUID="20e54080-e732-4925-b0c2-35669744821d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: 
I0127 17:03:26.485128 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" podUID="20e54080-e732-4925-b0c2-35669744821d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.662734 4966 generic.go:334] "Generic (PLEG): container finished" podID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerID="e2295b9d9c5579954267f6c3ac4518b2cfb177f96abe9f67553ea254d6c45bbd" exitCode=0 Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.662824 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" event={"ID":"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f","Type":"ContainerDied","Data":"e2295b9d9c5579954267f6c3ac4518b2cfb177f96abe9f67553ea254d6c45bbd"} Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.677148 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" podUID="e49e9fb2-a5f0-4106-b239-93d488e4f515" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.677149 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" podUID="e49e9fb2-a5f0-4106-b239-93d488e4f515" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.745502 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="20044d98-b229-4e9a-946f-b18902841fe6" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.751047 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.753466 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"c64846f7b44d76aed3bc93bd63c096a1a9c767154eac045ed6ae2b13727625f1"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.754368 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20044d98-b229-4e9a-946f-b18902841fe6" containerName="ceilometer-central-agent" containerID="cri-o://c64846f7b44d76aed3bc93bd63c096a1a9c767154eac045ed6ae2b13727625f1" gracePeriod=30 Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.849124 4966 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-88nmc container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": context deadline exceeded" start-of-body= Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.849496 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" 
podUID="c70eec8b-c8da-4620-9c5e-bb19e5d66424" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": context deadline exceeded" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.851172 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.851202 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.851236 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.851230 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.851285 4966 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.851334 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.979112 4966 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-95xm4 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.979169 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" podUID="80c26c09-83a0-4b08-979b-a138a5ed5d4b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.994174 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="20223cbd-a9e0-4eb8-b051-0833bebe5975" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:26 crc kubenswrapper[4966]: I0127 17:03:26.994209 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="20223cbd-a9e0-4eb8-b051-0833bebe5975" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.093694 4966 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-6s27c container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.093765 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" podUID="9c5e1e82-3053-4895-91ce-56475540fc35" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.241216 4966 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c74c5b958-9l7lg container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.241258 4966 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c74c5b958-9l7lg container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.241289 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" podUID="8ee78ad6-4785-4aee-a8cb-c16b147764d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.241321 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" podUID="8ee78ad6-4785-4aee-a8cb-c16b147764d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.337072 4966 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7cfns container/marketplace-operator 
namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.337101 4966 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7cfns container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.337160 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" podUID="692eec10-7d08-44ba-aa26-0ac0eacfb1e7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.337172 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" podUID="692eec10-7d08-44ba-aa26-0ac0eacfb1e7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.740306 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.740334 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.741859 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="832412d6-8f0c-4372-b056-87d49ac6f4bd" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.741972 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="832412d6-8f0c-4372-b056-87d49ac6f4bd" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.742072 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.895440 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.895528 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.912853 4966 patch_prober.go:28] interesting 
pod/logging-loki-gateway-575b568fc4-5wzxv container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.912921 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.981382 4966 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:27 crc kubenswrapper[4966]: I0127 17:03:27.981452 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e440811a-ec7d-4606-a78b-6b3d5062e044" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.041133 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.041178 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.071165 4966 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.071229 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="0caab707-59fa-4d4d-976b-e1f99d30fc01" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.162282 4966 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.75:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.162607 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="988dcb32-33f1-4e22-8b8c-a1a3b09828b3" 
containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.75:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.527782 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.7:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.527862 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.7:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.740327 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.740757 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.745264 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-5jgjj" podUID="ae54b281-3b7e-412e-8575-9096f191343e" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.745608 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-5jgjj" podUID="ae54b281-3b7e-412e-8575-9096f191343e" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.746755 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-5jgjj" podUID="ae54b281-3b7e-412e-8575-9096f191343e" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.748343 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-5jgjj" podUID="ae54b281-3b7e-412e-8575-9096f191343e" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.751676 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.751729 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 
27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.754811 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.754883 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.927180 4966 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-fws6n container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:28 crc kubenswrapper[4966]: I0127 17:03:28.927234 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" podUID="c4ecd65d-fa3e-456b-8db6-314cc20216ed" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.067553 4966 trace.go:236] Trace[1257353029]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (27-Jan-2026 17:03:19.971) (total time: 9092ms): Jan 27 17:03:29 crc kubenswrapper[4966]: Trace[1257353029]: [9.092146714s] [9.092146714s] END Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102107 4966 patch_prober.go:28] interesting pod/downloads-7954f5f757-kmmq4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102176 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kmmq4" podUID="36c5078e-fb86-4817-a08e-6d4b4e2bee7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102306 4966 patch_prober.go:28] interesting pod/downloads-7954f5f757-kmmq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102357 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kmmq4" podUID="36c5078e-fb86-4817-a08e-6d4b4e2bee7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:29 crc 
kubenswrapper[4966]: I0127 17:03:29.102407 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102421 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102446 4966 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zk5gr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102458 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" podUID="1317de86-7041-4b5a-8403-98489b8dc338" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102493 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102566 4966 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m7rc2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102571 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102599 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" podUID="f02135d5-ce67-4a94-9f94-60c29b672231" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102639 4966 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zk5gr container/catalog-operator 
namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102653 4966 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m7rc2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102653 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" podUID="1317de86-7041-4b5a-8403-98489b8dc338" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.102669 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" podUID="f02135d5-ce67-4a94-9f94-60c29b672231" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.292181 4966 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-cr8s9 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.293403 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" podUID="a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.292520 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-96rmm" podUID="fccdaa28-9674-4bb6-9c58-3f3905df1e56" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.292418 4966 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-cr8s9 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.293578 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" podUID="a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.375475 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.375543 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.375611 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.375630 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.672838 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.672924 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.759201 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" podUID="c9a9151b-f291-44db-a0fb-904cf48b7e37" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.934220 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.934268 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.934294 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.934326 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.947666 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.947725 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.960837 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"a4165bb5e68d0d51bd39f333d00d61bbdde7b503e116c44eaf4366167bd10168"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Jan 27 17:03:29 crc kubenswrapper[4966]: I0127 17:03:29.960958 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" containerID="cri-o://a4165bb5e68d0d51bd39f333d00d61bbdde7b503e116c44eaf4366167bd10168" gracePeriod=30
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.156077 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" podUID="2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.156159 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" podUID="2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.172840 4966 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-5bg28 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.172915 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5bg28" podUID="ee84a560-7150-49bd-94ac-e190aab8bc92" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.482887 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fpvwf"
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.659959 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": EOF" start-of-body=
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.660024 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": EOF"
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.716276 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" event={"ID":"e0d60f56-c8ec-4004-a1c4-4f014dbccf7f","Type":"ContainerStarted","Data":"40ee2ab08eb93a99cef40b3fd58fb7a8a10e809c7d24262e6b87e5f4914e4406"}
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.716810 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.717086 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Jan 27 17:03:30 crc kubenswrapper[4966]: I0127 17:03:30.717123 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.095192 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-nn8hx" podUID="9fb28925-f952-48ea-88e5-db1ec4dba047" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.095233 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-nn8hx" podUID="9fb28925-f952-48ea-88e5-db1ec4dba047" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.113998 4966 patch_prober.go:28] interesting pod/thanos-querier-78c6ff45cc-gspnf container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.114101 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" podUID="76f00c13-2195-40be-829a-ce9e9c94a795" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.131129 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-96rmm" podUID="fccdaa28-9674-4bb6-9c58-3f3905df1e56" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.550158 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" podUID="3e510e0a-a47d-416e-aec1-c7de88b0a2af" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.550605 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.553684 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"02ade3bf4a2516b3c82e52c35c6a892ef6a46a0fc6b01c0fcf75bdb056adf70c"} pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.553741 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" podUID="3e510e0a-a47d-416e-aec1-c7de88b0a2af" containerName="frr-k8s-webhook-server" containerID="cri-o://02ade3bf4a2516b3c82e52c35c6a892ef6a46a0fc6b01c0fcf75bdb056adf70c" gracePeriod=10
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.675108 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" podUID="3e510e0a-a47d-416e-aec1-c7de88b0a2af" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.675242 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.743043 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="832412d6-8f0c-4372-b056-87d49ac6f4bd" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759203 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759219 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759267 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759338 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fpvwf"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759369 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9876t"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759361 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759219 4966 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759451 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-fpvwf"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759448 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759492 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.759512 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-9876t"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.760420 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"21003d8cc66cfa2e9138bf3f651cfa4e728059aa8e17d564ddc296377bd08473"} pod="openshift-operators/observability-operator-59bdc8b94-9876t" containerMessage="Container operator failed liveness probe, will be restarted"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.760482 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" containerID="cri-o://21003d8cc66cfa2e9138bf3f651cfa4e728059aa8e17d564ddc296377bd08473" gracePeriod=30
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.760525 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"b7c9af743b5f3d202359cf8dde06682358d22535b1fcf5e71dfe0b97f8fa02d1"} pod="metallb-system/frr-k8s-fpvwf" containerMessage="Container controller failed liveness probe, will be restarted"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.760658 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="controller" containerID="cri-o://b7c9af743b5f3d202359cf8dde06682358d22535b1fcf5e71dfe0b97f8fa02d1" gracePeriod=2
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.800217 4966 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-x6l4k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.800483 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" podUID="b9f6b9a4-ded2-467b-9e87-6fafa667f709" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.800546 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.800676 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.800619 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.851011 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.851092 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.942286 4966 patch_prober.go:28] interesting pod/console-76cf6b7d9d-8vc2q container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": context deadline exceeded" start-of-body=
Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.942363 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-76cf6b7d9d-8vc2q" podUID="f71765ab-530f-4029-9091-f63413efd9c2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": context deadline exceeded" Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.993187 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="20223cbd-a9e0-4eb8-b051-0833bebe5975" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:31 crc kubenswrapper[4966]: I0127 17:03:31.993321 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="20223cbd-a9e0-4eb8-b051-0833bebe5975" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.052914 4966 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xgjgw container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.052990 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xgjgw" podUID="cc84a1d6-0aba-48d3-9fcf-d5bd5719e0f7" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.301419 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.301684 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.301441 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.301793 4966 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.301742 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.301906 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.302975 4966 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-rs2mz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.303002 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podUID="d0983135-11bf-4938-9360-757bc4556ec0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.302973 4966 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-rs2mz container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.303097 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podUID="d0983135-11bf-4938-9360-757bc4556ec0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.305479 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"9c8d1c00d3ce58b79a757a98f778173e98590799e4aec661eba4e7e4c12f7247"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.305590 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" containerID="cri-o://9c8d1c00d3ce58b79a757a98f778173e98590799e4aec661eba4e7e4c12f7247" gracePeriod=30 Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.579947 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-wpv4z" 
podUID="ce06a03b-db66-49be-ace7-f79a0b78dc62" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.580377 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wpv4z" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.580863 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-wpv4z" podUID="ce06a03b-db66-49be-ace7-f79a0b78dc62" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.580916 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-wpv4z" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.619986 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"b8e2f25137a3a0bdaaadce4e11fe9656739f79145d8fc96c9f1e4179d2dc4905"} pod="metallb-system/speaker-wpv4z" containerMessage="Container speaker failed liveness probe, will be restarted" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.620067 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-wpv4z" podUID="ce06a03b-db66-49be-ace7-f79a0b78dc62" containerName="speaker" containerID="cri-o://b8e2f25137a3a0bdaaadce4e11fe9656739f79145d8fc96c9f1e4179d2dc4905" gracePeriod=2 Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.702997 4966 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-72wzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f55af454-c7f3-4d9c-8c30-108b947410a7\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T17:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdz56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-marketplace\"/\"certified-operators-72wzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": context deadline exceeded" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.784946 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" event={"ID":"0046cf8f-c67b-4936-b3d6-1f7ac02eb919","Type":"ContainerDied","Data":"a4165bb5e68d0d51bd39f333d00d61bbdde7b503e116c44eaf4366167bd10168"} Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.786553 4966 
generic.go:334] "Generic (PLEG): container finished" podID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerID="a4165bb5e68d0d51bd39f333d00d61bbdde7b503e116c44eaf4366167bd10168" exitCode=0 Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.789215 4966 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-gl8pc container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.789294 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gl8pc" podUID="3fe20580-4a7b-4b46-9cc2-07c852e9c866" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.841220 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.841251 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.841408 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.882161 4966 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-x6l4k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.882275 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" podUID="b9f6b9a4-ded2-467b-9e87-6fafa667f709" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.896062 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.896115 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.896437 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.896578 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.913349 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.913391 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.913424 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:32 crc kubenswrapper[4966]: I0127 17:03:32.913526 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.302251 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.302314 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.623105 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-wpv4z" podUID="ce06a03b-db66-49be-ace7-f79a0b78dc62" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.740865 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="832412d6-8f0c-4372-b056-87d49ac6f4bd" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.768061 4966 patch_prober.go:28] interesting pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.768128 4966 patch_prober.go:28] interesting pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.768575 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.768619 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.768633 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.770185 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"e0308cf11a67b3756b015108ae56eb2f5293f8f037d81cc480162fc732578ef6"} pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" containerMessage="Container controller-manager failed liveness probe, will be restarted" Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.770231 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" containerID="cri-o://e0308cf11a67b3756b015108ae56eb2f5293f8f037d81cc480162fc732578ef6" gracePeriod=30 Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.812170 4966 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.812498 4966 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.878068 4966 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:33 crc kubenswrapper[4966]: I0127 17:03:33.878133 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.004079 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-75d854449c-9v6lm" podUID="cfd02a37-95ae-43f0-9e50-2e9d78202bd9" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.266087 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" podUID="eb03df91-4797-41be-a7fb-7ca572014c88" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.266137 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" podUID="093d4126-d96d-475a-9519-020f2f73a742" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.266098 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" podUID="093d4126-d96d-475a-9519-020f2f73a742" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.266108 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9s62h" podUID="eb03df91-4797-41be-a7fb-7ca572014c88" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.266262 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.431220 4966 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" podUID="9dcc8f2a-06d2-493e-b0ce-50120cef400e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.431314 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podUID="45594823-cdbb-4586-95d2-f2af9f6460b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.431385 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.431760 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podUID="45594823-cdbb-4586-95d2-f2af9f6460b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.446839 4966 trace.go:236] Trace[433714582]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (27-Jan-2026 17:03:31.857) (total time: 2569ms): Jan 27 17:03:34 crc kubenswrapper[4966]: Trace[433714582]: [2.569175021s] [2.569175021s] END Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.513051 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" podUID="9dcc8f2a-06d2-493e-b0ce-50120cef400e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.513112 4966 patch_prober.go:28] interesting pod/metrics-server-5fcbd5f794-2hhjm container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.513151 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" podUID="525a9ae1-69bf-4f75-b283-c0844b828a90" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.513072 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" podUID="624197a8-447a-4004-a1e0-679ce29dbe86" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.513184 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 17:03:34 crc 
kubenswrapper[4966]: I0127 17:03:34.513339 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.515484 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"f013bc67e00e99c0b09bf2f8d1090628c6bfd382bf7a380af3041ab2d9038e94"} pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" containerMessage="Container metrics-server failed liveness probe, will be restarted" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.515536 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" podUID="525a9ae1-69bf-4f75-b283-c0844b828a90" containerName="metrics-server" containerID="cri-o://f013bc67e00e99c0b09bf2f8d1090628c6bfd382bf7a380af3041ab2d9038e94" gracePeriod=170 Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.678287 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podUID="08ac68d1-220d-4098-9eed-6d0e3b752e5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.678287 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" podUID="624197a8-447a-4004-a1e0-679ce29dbe86" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.745451 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.745451 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-q8wrz" podUID="818e66f7-b294-448b-9d55-99de7ebd3f34" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.745544 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.745557 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-q8wrz" podUID="818e66f7-b294-448b-9d55-99de7ebd3f34" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.812625 4966 generic.go:334] "Generic (PLEG): container finished" podID="e75b042c-789e-43fc-8736-b3f5093f21db" containerID="b7c9af743b5f3d202359cf8dde06682358d22535b1fcf5e71dfe0b97f8fa02d1" exitCode=137 Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.812672 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" 
event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerDied","Data":"b7c9af743b5f3d202359cf8dde06682358d22535b1fcf5e71dfe0b97f8fa02d1"} Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.843147 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" podUID="871381eb-c218-433c-a004-fea884f4ced0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:34 crc kubenswrapper[4966]: I0127 17:03:34.968045 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" podUID="dd40e2cd-59aa-442b-b27a-209632cba6e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.010056 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-j6s8k" podUID="dd40e2cd-59aa-442b-b27a-209632cba6e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.010056 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" podUID="ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.176046 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-77jlm" podUID="ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.176046 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" podUID="e2cfe3d1-d500-418e-bc6b-4da3482999c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.176102 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podUID="08ac68d1-220d-4098-9eed-6d0e3b752e5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.176157 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.176189 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.258027 4966 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" podUID="3ae401e5-feea-47d3-9c86-1e33635a461a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.258056 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podUID="dd6f6600-3072-42e6-a8ca-5e72c960425a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.258117 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.258155 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.340173 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" podUID="8645d6d2-f7cd-4578-9a1a-8b07beeae08c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504118 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podUID="3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504122 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" podUID="3ae401e5-feea-47d3-9c86-1e33635a461a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504122 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-h6qtl" podUID="871381eb-c218-433c-a004-fea884f4ced0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504339 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" podUID="64b84834-e9db-4f50-a7c7-6d24302652d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504363 4966 patch_prober.go:28] interesting pod/monitoring-plugin-785c968969-bl9x5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504626 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" podUID="9bc3bce9-60e2-4ab9-ab45-28e69ba4a877" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504622 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504681 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504380 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" podUID="64b84834-e9db-4f50-a7c7-6d24302652d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504675 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504741 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.504795 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" podUID="6006cb9c-d22f-47b1-b8b6-cb999ecab7df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.505459 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.555197 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.589161 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" podUID="6006cb9c-d22f-47b1-b8b6-cb999ecab7df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.589198 4966 prober.go:107] "Probe failed" probeType="Readiness" 
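[editor's note] The failure text repeated above, Get "http://...:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers), is the standard Go net/http client timeout error: the kubelet prober gives each HTTP probe a deadline derived from the probe's timeoutSeconds, so a container that accepts the connection but is too starved to answer yields exactly this string. A minimal sketch that reproduces the message (illustrative, not kubelet source; assumes a local listener that accepts but never responds, e.g. `nc -l 8081`):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// The probe's timeoutSeconds (1s by default) becomes the client timeout.
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get("http://127.0.0.1:8081/healthz")
    	if err != nil {
    		// On a stalled endpoint this prints:
    		//   Get "http://127.0.0.1:8081/healthz": context deadline exceeded
    		//   (Client.Timeout exceeded while awaiting headers)
    		fmt.Println("probe failed:", err)
    		return
    	}
    	resp.Body.Close()
    	fmt.Println("probe ok:", resp.Status)
    }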
pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" podUID="434d2d44-cb00-40d2-90b5-64dd65faadc8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.589270 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" podUID="e2cfe3d1-d500-418e-bc6b-4da3482999c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.589318 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podUID="dd6f6600-3072-42e6-a8ca-5e72c960425a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.589850 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" podUID="8645d6d2-f7cd-4578-9a1a-8b07beeae08c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.589880 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" podUID="093d4126-d96d-475a-9519-020f2f73a742" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.589946 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" podUID="734cfb67-80ec-42a1-8d52-298ae82e1a6b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.590013 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-8bb444544-qmbfx" podUID="734cfb67-80ec-42a1-8d52-298ae82e1a6b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.632157 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podUID="3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.632269 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.632566 4966 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podUID="45594823-cdbb-4586-95d2-f2af9f6460b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.632931 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.632956 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-l5xd2" podUID="434d2d44-cb00-40d2-90b5-64dd65faadc8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.633016 4966 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-xfl7t container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.633039 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" podUID="b55dd294-68a4-4eba-b567-796255583e28" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.633049 4966 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-xfl7t container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.633079 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-xfl7t" podUID="b55dd294-68a4-4eba-b567-796255583e28" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.742134 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-86gp2" podUID="25afd019-0360-4ea5-ac94-94c6f42bb8a8" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.838975 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fpvwf" event={"ID":"e75b042c-789e-43fc-8736-b3f5093f21db","Type":"ContainerStarted","Data":"a1de2c07fa63fcc2a8c6a09464ce621471ed63868018f59c97beef1b2001a0b9"} Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.841276 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fpvwf" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.843966 4966 generic.go:334] "Generic (PLEG): container finished" podID="ce06a03b-db66-49be-ace7-f79a0b78dc62" 
containerID="b8e2f25137a3a0bdaaadce4e11fe9656739f79145d8fc96c9f1e4179d2dc4905" exitCode=137 Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.844042 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wpv4z" event={"ID":"ce06a03b-db66-49be-ace7-f79a0b78dc62","Type":"ContainerDied","Data":"b8e2f25137a3a0bdaaadce4e11fe9656739f79145d8fc96c9f1e4179d2dc4905"} Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.854770 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" event={"ID":"0046cf8f-c67b-4936-b3d6-1f7ac02eb919","Type":"ContainerStarted","Data":"74101d9bfbfdbae61fbe52d0fecffda3397fd48d3b74ff220d49e2a22739daf7"} Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.854944 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.859492 4966 generic.go:334] "Generic (PLEG): container finished" podID="3e510e0a-a47d-416e-aec1-c7de88b0a2af" containerID="02ade3bf4a2516b3c82e52c35c6a892ef6a46a0fc6b01c0fcf75bdb056adf70c" exitCode=0 Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.859558 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" event={"ID":"3e510e0a-a47d-416e-aec1-c7de88b0a2af","Type":"ContainerDied","Data":"02ade3bf4a2516b3c82e52c35c6a892ef6a46a0fc6b01c0fcf75bdb056adf70c"} Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.862131 4966 generic.go:334] "Generic (PLEG): container finished" podID="f58eecb1-9324-4165-9446-631a0438392e" containerID="9c8d1c00d3ce58b79a757a98f778173e98590799e4aec661eba4e7e4c12f7247" exitCode=0 Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.862174 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" event={"ID":"f58eecb1-9324-4165-9446-631a0438392e","Type":"ContainerDied","Data":"9c8d1c00d3ce58b79a757a98f778173e98590799e4aec661eba4e7e4c12f7247"} Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.864669 4966 generic.go:334] "Generic (PLEG): container finished" podID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerID="21003d8cc66cfa2e9138bf3f651cfa4e728059aa8e17d564ddc296377bd08473" exitCode=0 Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.864703 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9876t" event={"ID":"0443c8da-0b0f-4632-b990-f83e403a8b82","Type":"ContainerDied","Data":"21003d8cc66cfa2e9138bf3f651cfa4e728059aa8e17d564ddc296377bd08473"} Jan 27 17:03:35 crc kubenswrapper[4966]: I0127 17:03:35.949813 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2n8dp" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.048227 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" podUID="cfa058e6-1d6f-4dc2-8058-c00b201175b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.048396 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.114968 4966 patch_prober.go:28] interesting pod/thanos-querier-78c6ff45cc-gspnf container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.115034 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-78c6ff45cc-gspnf" podUID="76f00c13-2195-40be-829a-ce9e9c94a795" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.258118 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podUID="08ac68d1-220d-4098-9eed-6d0e3b752e5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.258104 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" podUID="e2cfe3d1-d500-418e-bc6b-4da3482999c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.340174 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" podUID="3ae401e5-feea-47d3-9c86-1e33635a461a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.340263 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podUID="dd6f6600-3072-42e6-a8ca-5e72c960425a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.370270 4966 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.370323 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.370363 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.373445 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.373532 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905" gracePeriod=30 Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.445401 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" podUID="20e54080-e732-4925-b0c2-35669744821d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.445589 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.505583 4966 patch_prober.go:28] interesting pod/monitoring-plugin-785c968969-bl9x5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.505698 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" podUID="9bc3bce9-60e2-4ab9-ab45-28e69ba4a877" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.564102 4966 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.564166 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" podUID="64b84834-e9db-4f50-a7c7-6d24302652d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.675213 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" podUID="3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.745028 4966 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-7br4n" podUID="0bcb091b-8f56-46c3-8437-2505b27684da" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.745206 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-7br4n" podUID="0bcb091b-8f56-46c3-8437-2505b27684da" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.745336 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-dvjsg" podUID="8ebef5e4-f520-44af-9488-659932ab7ff8" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.745617 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-dvjsg" podUID="8ebef5e4-f520-44af-9488-659932ab7ff8" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.847997 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.848446 4966 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-88nmc container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.848529 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" podUID="c70eec8b-c8da-4620-9c5e-bb19e5d66424" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.848648 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.849627 4966 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.849663 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.849734 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.878413 4966 generic.go:334] "Generic (PLEG): container finished" podID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" 
containerID="e0308cf11a67b3756b015108ae56eb2f5293f8f037d81cc480162fc732578ef6" exitCode=0 Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.878518 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" event={"ID":"03e547fd-14a6-41eb-9bf7-8aea75e60ddf","Type":"ContainerDied","Data":"e0308cf11a67b3756b015108ae56eb2f5293f8f037d81cc480162fc732578ef6"} Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.978562 4966 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-95xm4 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.978653 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" podUID="80c26c09-83a0-4b08-979b-a138a5ed5d4b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.978742 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.993751 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="20223cbd-a9e0-4eb8-b051-0833bebe5975" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.993794 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="20223cbd-a9e0-4eb8-b051-0833bebe5975" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": context deadline exceeded" Jan 27 17:03:36 crc kubenswrapper[4966]: I0127 17:03:36.993933 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.095333 4966 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-6s27c container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.095399 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" podUID="9c5e1e82-3053-4895-91ce-56475540fc35" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.095489 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.199079 4966 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c74c5b958-9l7lg container/manager namespace/openshift-operators-redhat: 
Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.199699 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" podUID="8ee78ad6-4785-4aee-a8cb-c16b147764d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.235119 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.289252 4966 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7cfns container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.289338 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7cfns" podUID="692eec10-7d08-44ba-aa26-0ac0eacfb1e7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.489125 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" podUID="20e54080-e732-4925-b0c2-35669744821d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.760953 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.761365 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.762390 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-72wzd" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.763030 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.763132 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.830395 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.830547 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.831055 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.851272 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.851330 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.851391 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.851421 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.894799 4966 generic.go:334] "Generic (PLEG): container finished" podID="20044d98-b229-4e9a-946f-b18902841fe6" containerID="c64846f7b44d76aed3bc93bd63c096a1a9c767154eac045ed6ae2b13727625f1" exitCode=0 Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.894861 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20044d98-b229-4e9a-946f-b18902841fe6","Type":"ContainerDied","Data":"c64846f7b44d76aed3bc93bd63c096a1a9c767154eac045ed6ae2b13727625f1"} Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.895147 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.895201 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.898622 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" event={"ID":"f58eecb1-9324-4165-9446-631a0438392e","Type":"ContainerStarted","Data":"fc38117737d2b79c53f175265aa2e888ede0f5281bee36323087f55c4cba3fc5"} Jan 27 
17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.899165 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.899202 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.899616 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"3bb901e699a65f37bdaf20f6ec41a0f298b753e6f1d314eb609b826aa8bcd29a"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.911845 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:37 crc kubenswrapper[4966]: I0127 17:03:37.911939 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.042174 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.042249 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.042273 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.050729 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cert-manager-webhook" containerStatusID={"Type":"cri-o","ID":"ddb977a8d41018ec4123c6e4bc785fd09ef6d6e5020dc06b779d31018fcf9484"} pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" containerMessage="Container cert-manager-webhook failed liveness probe, will be restarted" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.050804 4966 kuberuntime_container.go:808] "Killing container with a grace 
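[editor's note] Several probes in this stretch are exec probes rather than HTTP ones (galera, nmstate-handler, the marketplace registry-server pods), and they fail with the fixed string output="command timed out": for exec probes kubelet runs a command inside the container and enforces the probe's timeoutSeconds itself. A sketch of the same pattern in plain Go (assumed shape for illustration, not kubelet source):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// The probe's timeoutSeconds becomes a context deadline on the command.
    	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
    	defer cancel()
    	out, err := exec.CommandContext(ctx, "sleep", "5").CombinedOutput()
    	if ctx.Err() == context.DeadlineExceeded {
    		// This is the condition the prober reports as "command timed out".
    		fmt.Println("command timed out")
    		return
    	}
    	fmt.Println(string(out), err)
    }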
period" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" containerID="cri-o://ddb977a8d41018ec4123c6e4bc785fd09ef6d6e5020dc06b779d31018fcf9484" gracePeriod=30 Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.450916 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-85mqp" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:38 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:38 crc kubenswrapper[4966]: > Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.450907 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-85mqp" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:38 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:38 crc kubenswrapper[4966]: > Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.740626 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.740670 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.740675 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.741091 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.741112 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.749572 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.749646 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.749713 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:38 crc 
kubenswrapper[4966]: I0127 17:03:38.749749 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.749866 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.749965 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.755310 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"8df87488c4564c6b6214014bff5f3c09ae41a9d1b9c170548638f483029f4791"} pod="openshift-console-operator/console-operator-58897d9998-wpbsk" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.755384 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" containerID="cri-o://8df87488c4564c6b6214014bff5f3c09ae41a9d1b9c170548638f483029f4791" gracePeriod=30 Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.755684 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"8631c1d2e4e6006cc5c0581834e9d9d692faafcdd99032634f07f208377d2c31"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.914211 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" event={"ID":"03e547fd-14a6-41eb-9bf7-8aea75e60ddf","Type":"ContainerStarted","Data":"1a5574982dc2bea74f50230c48714776ddfd82583bff8e619ac733ed48c0f778"} Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.914390 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.915643 4966 patch_prober.go:28] interesting pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.915691 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.918457 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" 
event={"ID":"3e510e0a-a47d-416e-aec1-c7de88b0a2af","Type":"ContainerStarted","Data":"e5b827d96f2b60f29878aaca661c2bcd3a7653fc60732f3e4df96d10b496b659"} Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.918875 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.923138 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9876t" event={"ID":"0443c8da-0b0f-4632-b990-f83e403a8b82","Type":"ContainerStarted","Data":"f5ae4df0429c2bc8609acbbe378f8ed901467052ac0355a4bdee1bd5ffb07837"} Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.923444 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.923567 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9876t" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.923794 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.923841 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.923959 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" start-of-body= Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.923986 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.927401 4966 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-fws6n container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:38 crc kubenswrapper[4966]: I0127 17:03:38.927567 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-fws6n" podUID="c4ecd65d-fa3e-456b-8db6-314cc20216ed" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc 
kubenswrapper[4966]: I0127 17:03:39.046179 4966 patch_prober.go:28] interesting pod/downloads-7954f5f757-kmmq4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.046248 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kmmq4" podUID="36c5078e-fb86-4817-a08e-6d4b4e2bee7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.051230 4966 patch_prober.go:28] interesting pod/downloads-7954f5f757-kmmq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.088324 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.088379 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.088457 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089191 4966 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zk5gr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089245 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" podUID="1317de86-7041-4b5a-8403-98489b8dc338" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089410 4966 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m7rc2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089433 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" podUID="f02135d5-ce67-4a94-9f94-60c29b672231" containerName="olm-operator" probeResult="failure" 
output="Get \"https://10.217.0.27:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089496 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089512 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089545 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089567 4966 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m7rc2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089601 4966 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zk5gr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089656 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7rc2" podUID="f02135d5-ce67-4a94-9f94-60c29b672231" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089677 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zk5gr" podUID="1317de86-7041-4b5a-8403-98489b8dc338" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089665 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"83199eac14e6d24d69364c158fddb5436530ed5932ffdb5bb12c9bc0b9842de6"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.089748 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" 
podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" containerID="cri-o://83199eac14e6d24d69364c158fddb5436530ed5932ffdb5bb12c9bc0b9842de6" gracePeriod=30 Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.292867 4966 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-cr8s9 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.292919 4966 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-cr8s9 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.292933 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" podUID="a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.292976 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.292980 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" podUID="a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.293099 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.316657 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"712f7f3fa67c4eac572f4a1a00e314db48749e033361fb7e97d90c55e29b1f48"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.316703 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" podUID="a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7" containerName="package-server-manager" containerID="cri-o://712f7f3fa67c4eac572f4a1a00e314db48749e033361fb7e97d90c55e29b1f48" gracePeriod=30 Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.375296 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.375345 4966 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.375382 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.375414 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.375444 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.375499 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.382818 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"bd020432a2d6ff1957eff4ef787cd85c20d0b8d3aaa3f6a34084f838c823a40b"} pod="openshift-ingress/router-default-5444994796-wzrqq" containerMessage="Container router failed liveness probe, will be restarted" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.382917 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" containerID="cri-o://bd020432a2d6ff1957eff4ef787cd85c20d0b8d3aaa3f6a34084f838c823a40b" gracePeriod=10 Jan 27 17:03:39 crc kubenswrapper[4966]: E0127 17:03:39.475150 4966 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T17:03:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T17:03:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T17:03:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T17:03:29Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.565236 
4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": EOF" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.565289 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": EOF" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.672526 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.672885 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.741653 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.799186 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-784794b655-q7lf8" podUID="c9a9151b-f291-44db-a0fb-904cf48b7e37" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.958743 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.996776 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-wpbsk_12b9edc6-687c-47b9-b8c6-8fa656fc40de/console-operator/0.log" Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.997597 4966 generic.go:334] "Generic (PLEG): container finished" podID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerID="8df87488c4564c6b6214014bff5f3c09ae41a9d1b9c170548638f483029f4791" exitCode=1 Jan 27 17:03:39 crc kubenswrapper[4966]: I0127 17:03:39.997697 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" event={"ID":"12b9edc6-687c-47b9-b8c6-8fa656fc40de","Type":"ContainerDied","Data":"8df87488c4564c6b6214014bff5f3c09ae41a9d1b9c170548638f483029f4791"} Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.001834 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wpv4z" event={"ID":"ce06a03b-db66-49be-ace7-f79a0b78dc62","Type":"ContainerStarted","Data":"4229ae08274bd4f1013a205c719bdcc6d46839e8f9a323d89ece18e3830cf769"} Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.002397 4966 patch_prober.go:28] interesting 
pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.002455 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.002612 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.002660 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.002984 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" start-of-body= Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.003015 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.005505 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-wpv4z" podUID="ce06a03b-db66-49be-ace7-f79a0b78dc62" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.089857 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.090222 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.529051 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t 
container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" start-of-body= Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.530153 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.530259 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" start-of-body= Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.530284 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.746735 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.851477 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.851542 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.851483 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 17:03:40 crc kubenswrapper[4966]: I0127 17:03:40.851797 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.011779 4966 generic.go:334] "Generic (PLEG): container finished" podID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerID="ddb977a8d41018ec4123c6e4bc785fd09ef6d6e5020dc06b779d31018fcf9484" exitCode=0 Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.012073 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" 
event={"ID":"37cde3a9-999c-4c96-a024-1769c058c4c8","Type":"ContainerDied","Data":"ddb977a8d41018ec4123c6e4bc785fd09ef6d6e5020dc06b779d31018fcf9484"} Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.021229 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20044d98-b229-4e9a-946f-b18902841fe6","Type":"ContainerStarted","Data":"aefd6da8bdccca8cabcbd3cf909bb6d7f8956d6bef2fcc844244d230ec0a53a2"} Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.032934 4966 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905" exitCode=0 Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.033086 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"64f96604aa4197401e6a4c779ffa48a04a7050400f22f4da428070021f15e905"} Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.033226 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wpv4z" Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.059670 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fpvwf" Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.300970 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.301005 4966 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lptwz container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.301023 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.301052 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz" podUID="f58eecb1-9324-4165-9446-631a0438392e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.451018 4966 trace.go:236] Trace[1671250096]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (27-Jan-2026 17:03:39.161) (total time: 2286ms): Jan 27 17:03:41 crc kubenswrapper[4966]: Trace[1671250096]: [2.286321177s] [2.286321177s] END Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.457237 4966 trace.go:236] Trace[1040687807]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (27-Jan-2026 17:03:37.756) (total time: 3700ms): 
Jan 27 17:03:41 crc kubenswrapper[4966]: Trace[1040687807]: [3.700601127s] [3.700601127s] END Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.606430 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-72wzd" podStartSLOduration=33.493900634 podStartE2EDuration="49.60490997s" podCreationTimestamp="2026-01-27 17:02:52 +0000 UTC" firstStartedPulling="2026-01-27 17:02:57.254803545 +0000 UTC m=+4843.557597033" lastFinishedPulling="2026-01-27 17:03:13.365812891 +0000 UTC m=+4859.668606369" observedRunningTime="2026-01-27 17:03:41.590754427 +0000 UTC m=+4887.893547935" watchObservedRunningTime="2026-01-27 17:03:41.60490997 +0000 UTC m=+4887.907703458" Jan 27 17:03:41 crc kubenswrapper[4966]: I0127 17:03:41.957971 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused" Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.046955 4966 generic.go:334] "Generic (PLEG): container finished" podID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerID="83199eac14e6d24d69364c158fddb5436530ed5932ffdb5bb12c9bc0b9842de6" exitCode=0 Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.047031 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" event={"ID":"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958","Type":"ContainerDied","Data":"83199eac14e6d24d69364c158fddb5436530ed5932ffdb5bb12c9bc0b9842de6"} Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.057964 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-wpbsk_12b9edc6-687c-47b9-b8c6-8fa656fc40de/console-operator/0.log" Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.058046 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" event={"ID":"12b9edc6-687c-47b9-b8c6-8fa656fc40de","Type":"ContainerStarted","Data":"69e16d22047046df2d75dc4e17e7fddf95cf00aee304ce0469c49179f0217c8e"} Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.058797 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.058850 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.385508 4966 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-rs2mz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.385866 4966 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podUID="d0983135-11bf-4938-9360-757bc4556ec0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.385946 4966 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-rs2mz container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.385964 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-666545c866-rs2mz" podUID="d0983135-11bf-4938-9360-757bc4556ec0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.686037 4966 patch_prober.go:28] interesting pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.686111 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.896943 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.897033 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.914718 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:42 crc kubenswrapper[4966]: I0127 17:03:42.914777 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.075813 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" 
event={"ID":"37cde3a9-999c-4c96-a024-1769c058c4c8","Type":"ContainerStarted","Data":"a7bfa20dea1436ea34419b47147fc0be3d21693fd7d8d677c2f491dc98f48128"} Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.078228 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.085036 4966 generic.go:334] "Generic (PLEG): container finished" podID="a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7" containerID="712f7f3fa67c4eac572f4a1a00e314db48749e033361fb7e97d90c55e29b1f48" exitCode=0 Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.085142 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" event={"ID":"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7","Type":"ContainerDied","Data":"712f7f3fa67c4eac572f4a1a00e314db48749e033361fb7e97d90c55e29b1f48"} Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.087959 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" event={"ID":"70f9fda4-72f3-4f6a-8b6a-38dffbb6c958","Type":"ContainerStarted","Data":"a25601d15988bae0c616b6a2f00130ede9f7387df3a79315242e2d1c05803552"} Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.089652 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.093487 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.093545 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.103880 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e8ef335cee3c59b32dfb1d13d1ae7ac6250700c1935f1f1df0ea8ab0ccfcad0a"} Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.104095 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.104349 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.104374 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.104408 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.647770 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:43 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:43 crc kubenswrapper[4966]: > Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.647856 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.647770 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:43 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:43 crc kubenswrapper[4966]: > Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.648013 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-48xxm" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.649032 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"5212cec5a0a5d6f38ebf2aaa68784012ddcd0e085ca8b77ba837e11485c578e2"} pod="openstack-operators/openstack-operator-index-48xxm" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.649082 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" containerID="cri-o://5212cec5a0a5d6f38ebf2aaa68784012ddcd0e085ca8b77ba837e11485c578e2" gracePeriod=30 Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.851343 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.851355 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.851426 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.851489 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" 
podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.851502 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.852032 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.852068 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.852680 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"74101d9bfbfdbae61fbe52d0fecffda3397fd48d3b74ff220d49e2a22739daf7"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.852719 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" containerID="cri-o://74101d9bfbfdbae61fbe52d0fecffda3397fd48d3b74ff220d49e2a22739daf7" gracePeriod=30 Jan 27 17:03:43 crc kubenswrapper[4966]: I0127 17:03:43.867230 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-72wzd" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:43 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:43 crc kubenswrapper[4966]: > Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.036344 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.117151 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" event={"ID":"a5bd3541-a72b-4c0c-9945-dfb48ef3c9f7","Type":"ContainerStarted","Data":"de479dea9b77bab949d6ac509dbb9ec85e81b5a1bdd5c68313c080c9f2054bfa"} Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.118070 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 17:03:44 crc kubenswrapper[4966]: E0127 17:03:44.143716 4966 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0046cf8f_c67b_4936_b3d6_1f7ac02eb919.slice/crio-74101d9bfbfdbae61fbe52d0fecffda3397fd48d3b74ff220d49e2a22739daf7.scope\": RecentStats: unable to find data in memory cache]" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.158053 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h" podUID="093d4126-d96d-475a-9519-020f2f73a742" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.158310 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.158662 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.171599 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.171664 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.281045 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr" podUID="45594823-cdbb-4586-95d2-f2af9f6460b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.281113 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rzghf" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.322053 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" podUID="9dcc8f2a-06d2-493e-b0ce-50120cef400e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.322159 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.468393 4966 patch_prober.go:28] interesting pod/metrics-server-5fcbd5f794-2hhjm 
container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.468690 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" podUID="525a9ae1-69bf-4f75-b283-c0844b828a90" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.468742 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-sttbs" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.468476 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j7j9c" podUID="624197a8-447a-4004-a1e0-679ce29dbe86" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.469009 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-crddt" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.511068 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" podUID="08ac68d1-220d-4098-9eed-6d0e3b752e5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.624125 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rv5wp" podUID="f096bdf7-f589-4344-b71f-ab9db2eded5f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.796154 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" podUID="8645d6d2-f7cd-4578-9a1a-8b07beeae08c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.796259 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-88lhp" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.796348 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc" Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.796794 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" podUID="dd6f6600-3072-42e6-a8ca-5e72c960425a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.836187 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera" containerID="cri-o://8631c1d2e4e6006cc5c0581834e9d9d692faafcdd99032634f07f208377d2c31" gracePeriod=24
Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.836787 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" containerID="cri-o://3bb901e699a65f37bdaf20f6ec41a0f298b753e6f1d314eb609b826aa8bcd29a" gracePeriod=24
Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.846181 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zdjjc"
Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.890769 4966 patch_prober.go:28] interesting pod/monitoring-plugin-785c968969-bl9x5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:44 crc kubenswrapper[4966]: I0127 17:03:44.891068 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" podUID="9bc3bce9-60e2-4ab9-ab45-28e69ba4a877" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:45 crc kubenswrapper[4966]: I0127 17:03:45.130424 4966 generic.go:334] "Generic (PLEG): container finished" podID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerID="5212cec5a0a5d6f38ebf2aaa68784012ddcd0e085ca8b77ba837e11485c578e2" exitCode=0
Jan 27 17:03:45 crc kubenswrapper[4966]: I0127 17:03:45.130488 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-48xxm" event={"ID":"57175838-b13a-4dc9-bf85-ed8668e3d88c","Type":"ContainerDied","Data":"5212cec5a0a5d6f38ebf2aaa68784012ddcd0e085ca8b77ba837e11485c578e2"}
Jan 27 17:03:45 crc kubenswrapper[4966]: I0127 17:03:45.152622 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-pn2q5_0046cf8f-c67b-4936-b3d6-1f7ac02eb919/openshift-config-operator/1.log"
Jan 27 17:03:45 crc kubenswrapper[4966]: I0127 17:03:45.161879 4966 generic.go:334] "Generic (PLEG): container finished" podID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerID="74101d9bfbfdbae61fbe52d0fecffda3397fd48d3b74ff220d49e2a22739daf7" exitCode=2
Jan 27 17:03:45 crc kubenswrapper[4966]: I0127 17:03:45.161948 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" event={"ID":"0046cf8f-c67b-4936-b3d6-1f7ac02eb919","Type":"ContainerDied","Data":"74101d9bfbfdbae61fbe52d0fecffda3397fd48d3b74ff220d49e2a22739daf7"}
Jan 27 17:03:45 crc kubenswrapper[4966]: I0127 17:03:45.162585 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body=
Jan 27 17:03:45 crc kubenswrapper[4966]: I0127 17:03:45.162641 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused"
Jan 27 17:03:45 crc kubenswrapper[4966]: I0127 17:03:45.177465 4966 scope.go:117] "RemoveContainer" containerID="a4165bb5e68d0d51bd39f333d00d61bbdde7b503e116c44eaf4366167bd10168"
Jan 27 17:03:45 crc kubenswrapper[4966]: I0127 17:03:45.744504 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="20044d98-b229-4e9a-946f-b18902841fe6" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.090565 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" podUID="cfa058e6-1d6f-4dc2-8058-c00b201175b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.090839 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-qwc6v" podUID="cfa058e6-1d6f-4dc2-8058-c00b201175b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.173516 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-48xxm" event={"ID":"57175838-b13a-4dc9-bf85-ed8668e3d88c","Type":"ContainerStarted","Data":"61bea754ee2c0d49efc30a6bd4a21e7ef9fab56ad7a6ee84b5157019a77a6c6d"}
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.181059 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-pn2q5_0046cf8f-c67b-4936-b3d6-1f7ac02eb919/openshift-config-operator/1.log"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.182022 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" event={"ID":"0046cf8f-c67b-4936-b3d6-1f7ac02eb919","Type":"ContainerStarted","Data":"61909d6e14ff3845105f4a0da672c18867a3a7928a7f21eecf8f7de62518981c"}
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.182385 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5"
Jan 27 17:03:46 crc kubenswrapper[4966]: E0127 17:03:46.258301 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3bb901e699a65f37bdaf20f6ec41a0f298b753e6f1d314eb609b826aa8bcd29a" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 27 17:03:46 crc kubenswrapper[4966]: E0127 17:03:46.260110 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3bb901e699a65f37bdaf20f6ec41a0f298b753e6f1d314eb609b826aa8bcd29a" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 27 17:03:46 crc kubenswrapper[4966]: E0127 17:03:46.261663 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3bb901e699a65f37bdaf20f6ec41a0f298b753e6f1d314eb609b826aa8bcd29a" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 27 17:03:46 crc kubenswrapper[4966]: E0127 17:03:46.261891 4966 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.454407 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" podUID="20e54080-e732-4925-b0c2-35669744821d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.454643 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" podUID="20e54080-e732-4925-b0c2-35669744821d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.454689 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="987fabcb-b141-4f03-96fe-d2acf923452c" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.680072 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" podUID="e49e9fb2-a5f0-4106-b239-93d488e4f515" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.680100 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-69869d7dcf-h42mh" podUID="e49e9fb2-a5f0-4106-b239-93d488e4f515" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.850366 4966 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-88nmc container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.978573 4966 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-95xm4 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:46 crc kubenswrapper[4966]: I0127 17:03:46.978635 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-95xm4" podUID="80c26c09-83a0-4b08-979b-a138a5ed5d4b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.094565 4966 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-6s27c container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.094661 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-6s27c" podUID="9c5e1e82-3053-4895-91ce-56475540fc35" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.177312 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" podUID="c70eec8b-c8da-4620-9c5e-bb19e5d66424" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.242033 4966 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c74c5b958-9l7lg container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.242105 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" podUID="8ee78ad6-4785-4aee-a8cb-c16b147764d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.242222 4966 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c74c5b958-9l7lg container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.242283 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg" podUID="8ee78ad6-4785-4aee-a8cb-c16b147764d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.242386 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.435958 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7c74c5b958-9l7lg"
Jan 27 17:03:47 crc kubenswrapper[4966]: E0127 17:03:47.543650 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8631c1d2e4e6006cc5c0581834e9d9d692faafcdd99032634f07f208377d2c31" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 27 17:03:47 crc kubenswrapper[4966]: E0127 17:03:47.550812 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8631c1d2e4e6006cc5c0581834e9d9d692faafcdd99032634f07f208377d2c31" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 27 17:03:47 crc kubenswrapper[4966]: E0127 17:03:47.552325 4966 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8631c1d2e4e6006cc5c0581834e9d9d692faafcdd99032634f07f208377d2c31" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 27 17:03:47 crc kubenswrapper[4966]: E0127 17:03:47.552359 4966 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1be6855-0a73-406a-93d5-625f7fca558b" containerName="galera"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.751464 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.751535 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.753562 4966 patch_prober.go:28] interesting pod/console-operator-58897d9998-wpbsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.753732 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" podUID="12b9edc6-687c-47b9-b8c6-8fa656fc40de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.849410 4966 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-88nmc container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.849691 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-88nmc" podUID="c70eec8b-c8da-4620-9c5e-bb19e5d66424" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.895814 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-lr7wd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.895884 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-lr7wd" podUID="a58b269c-6e15-4eda-aa6f-00e51aa132fe" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.914287 4966 patch_prober.go:28] interesting pod/logging-loki-gateway-575b568fc4-5wzxv container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:47 crc kubenswrapper[4966]: I0127 17:03:47.914378 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575b568fc4-5wzxv" podUID="fa2c58c2-9b23-4360-897e-582237775277" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:48 crc kubenswrapper[4966]: I0127 17:03:48.087580 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body=
Jan 27 17:03:48 crc kubenswrapper[4966]: I0127 17:03:48.087645 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused"
Jan 27 17:03:48 crc kubenswrapper[4966]: I0127 17:03:48.087814 4966 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgklq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body=
Jan 27 17:03:48 crc kubenswrapper[4966]: I0127 17:03:48.087867 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgklq" podUID="70f9fda4-72f3-4f6a-8b6a-38dffbb6c958" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused"
Jan 27 17:03:48 crc kubenswrapper[4966]: I0127 17:03:48.769506 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="987fabcb-b141-4f03-96fe-d2acf923452c" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 17:03:48 crc kubenswrapper[4966]: I0127 17:03:48.993349 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-48xxm"
Jan 27 17:03:48 crc kubenswrapper[4966]: I0127 17:03:48.994077 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-48xxm"
Jan 27 17:03:49 crc kubenswrapper[4966]: I0127 17:03:49.309051 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:49 crc kubenswrapper[4966]: I0127 17:03:49.309402 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:49 crc kubenswrapper[4966]: I0127 17:03:49.673271 4966 patch_prober.go:28] interesting pod/route-controller-manager-86647b877f-m6l5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Jan 27 17:03:49 crc kubenswrapper[4966]: I0127 17:03:49.673328 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t" podUID="e0d60f56-c8ec-4004-a1c4-4f014dbccf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Jan 27 17:03:49 crc kubenswrapper[4966]: I0127 17:03:49.850997 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 27 17:03:49 crc kubenswrapper[4966]: I0127 17:03:49.851318 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 27 17:03:49 crc kubenswrapper[4966]: I0127 17:03:49.852552 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe
Jan 27 17:03:49 crc kubenswrapper[4966]: I0127 17:03:49.852675 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.157144 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" podUID="2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.157250 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-546646bf6b-gmbc9" podUID="2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.239305 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-wzrqq_08398814-3579-49c5-bf30-b8e700fabdab/router/0.log"
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.239652 4966 generic.go:334] "Generic (PLEG): container finished" podID="08398814-3579-49c5-bf30-b8e700fabdab" containerID="bd020432a2d6ff1957eff4ef787cd85c20d0b8d3aaa3f6a34084f838c823a40b" exitCode=137
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.239798 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wzrqq" event={"ID":"08398814-3579-49c5-bf30-b8e700fabdab","Type":"ContainerDied","Data":"bd020432a2d6ff1957eff4ef787cd85c20d0b8d3aaa3f6a34084f838c823a40b"}
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.530639 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" start-of-body=
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.530940 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused"
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.530647 4966 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9876t container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused" start-of-body=
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.531051 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9876t" podUID="0443c8da-0b0f-4632-b990-f83e403a8b82" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.6:8081/healthz\": dial tcp 10.217.0.6:8081: connect: connection refused"
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.553985 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fpvwf"
Jan 27 17:03:50 crc kubenswrapper[4966]: I0127 17:03:50.557992 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hfhvb"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.270516 4966 generic.go:334] "Generic (PLEG): container finished" podID="a1be6855-0a73-406a-93d5-625f7fca558b" containerID="8631c1d2e4e6006cc5c0581834e9d9d692faafcdd99032634f07f208377d2c31" exitCode=0
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.270940 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1be6855-0a73-406a-93d5-625f7fca558b","Type":"ContainerDied","Data":"8631c1d2e4e6006cc5c0581834e9d9d692faafcdd99032634f07f208377d2c31"}
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.290312 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-wzrqq_08398814-3579-49c5-bf30-b8e700fabdab/router/0.log"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.290370 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wzrqq" event={"ID":"08398814-3579-49c5-bf30-b8e700fabdab","Type":"ContainerStarted","Data":"07cfa6588b788e613bf5ff14e8d014b73994af97c3b723a887dc5ea60204011d"}
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.525147 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fpvwf" podUID="e75b042c-789e-43fc-8736-b3f5093f21db" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.525254 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="987fabcb-b141-4f03-96fe-d2acf923452c" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.525352 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.526339 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"e1196732ce70b236cab0778765692e918642b8d2323d9109cf37e08adf689da2"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.526393 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="987fabcb-b141-4f03-96fe-d2acf923452c" containerName="cinder-scheduler" containerID="cri-o://e1196732ce70b236cab0778765692e918642b8d2323d9109cf37e08adf689da2" gracePeriod=30
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.536375 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lptwz"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.646523 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-48xxm" podUID="57175838-b13a-4dc9-bf85-ed8668e3d88c" containerName="registry-server" probeResult="failure" output=<
Jan 27 17:03:51 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s
Jan 27 17:03:51 crc kubenswrapper[4966]: >
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.764375 4966 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-x6l4k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.764450 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-x6l4k" podUID="b9f6b9a4-ded2-467b-9e87-6fafa667f709" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.845870 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wpv4z"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.942078 4966 patch_prober.go:28] interesting pod/console-76cf6b7d9d-8vc2q container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.942132 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-76cf6b7d9d-8vc2q" podUID="f71765ab-530f-4029-9091-f63413efd9c2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:03:51 crc kubenswrapper[4966]: I0127 17:03:51.958400 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" podUID="37cde3a9-999c-4c96-a024-1769c058c4c8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused"
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.269182 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wzrqq"
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.271890 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.271955 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.309122 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1be6855-0a73-406a-93d5-625f7fca558b","Type":"ContainerStarted","Data":"0d863831e0207131b92d7b853641344b95ca157226489d184ea05566549a8266"}
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.685747 4966 patch_prober.go:28] interesting pod/controller-manager-7df4589fcc-vdfk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body=
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.686123 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" podUID="03e547fd-14a6-41eb-9bf7-8aea75e60ddf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused"
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.850936 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.851371 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.852229 4966 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pn2q5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 27 17:03:52 crc kubenswrapper[4966]: I0127 17:03:52.852281 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" podUID="0046cf8f-c67b-4936-b3d6-1f7ac02eb919" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.150319 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hqm7h"
Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.254429 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-76ffr"
Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.276329 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.276372 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
\"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.432177 4966 generic.go:334] "Generic (PLEG): container finished" podID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerID="3bb901e699a65f37bdaf20f6ec41a0f298b753e6f1d314eb609b826aa8bcd29a" exitCode=0 Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.432248 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1dc01362-ea5a-48fe-b67f-1e00b193c36e","Type":"ContainerDied","Data":"3bb901e699a65f37bdaf20f6ec41a0f298b753e6f1d314eb609b826aa8bcd29a"} Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.486376 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6d5tv" Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.766433 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-kxj69" Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.794393 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-72wzd" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="registry-server" probeResult="failure" output=< Jan 27 17:03:53 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:03:53 crc kubenswrapper[4966]: > Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.971873 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jnqzt"] Jan 27 17:03:53 crc kubenswrapper[4966]: I0127 17:03:53.994939 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.048763 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-catalog-content\") pod \"redhat-marketplace-jnqzt\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.049074 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-utilities\") pod \"redhat-marketplace-jnqzt\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.049210 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p7h7\" (UniqueName: \"kubernetes.io/projected/d0537ee3-2679-4cee-96ad-d5f5dfa15783-kube-api-access-7p7h7\") pod \"redhat-marketplace-jnqzt\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.051812 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-785c968969-bl9x5" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.151810 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p7h7\" (UniqueName: \"kubernetes.io/projected/d0537ee3-2679-4cee-96ad-d5f5dfa15783-kube-api-access-7p7h7\") pod \"redhat-marketplace-jnqzt\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.152002 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-catalog-content\") pod \"redhat-marketplace-jnqzt\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.152084 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-utilities\") pod \"redhat-marketplace-jnqzt\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.156107 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-utilities\") pod \"redhat-marketplace-jnqzt\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.169399 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-catalog-content\") pod \"redhat-marketplace-jnqzt\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.220811 4966 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p7h7\" (UniqueName: \"kubernetes.io/projected/d0537ee3-2679-4cee-96ad-d5f5dfa15783-kube-api-access-7p7h7\") pod \"redhat-marketplace-jnqzt\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.226858 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnqzt"] Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.284309 4966 patch_prober.go:28] interesting pod/router-default-5444994796-wzrqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 17:03:54 crc kubenswrapper[4966]: [+]has-synced ok Jan 27 17:03:54 crc kubenswrapper[4966]: [+]process-running ok Jan 27 17:03:54 crc kubenswrapper[4966]: healthz check failed Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.284472 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wzrqq" podUID="08398814-3579-49c5-bf30-b8e700fabdab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.320887 4966 trace.go:236] Trace[475903242]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (27-Jan-2026 17:03:52.822) (total time: 1487ms): Jan 27 17:03:54 crc kubenswrapper[4966]: Trace[475903242]: [1.487169382s] [1.487169382s] END Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.332044 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:03:54 crc kubenswrapper[4966]: I0127 17:03:54.650320 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1dc01362-ea5a-48fe-b67f-1e00b193c36e","Type":"ContainerStarted","Data":"962b35b2596bd58eb9731ec5037edb5bd51776afee679c0fd120a25b54f066eb"} Jan 27 17:03:55 crc kubenswrapper[4966]: I0127 17:03:55.276064 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 17:03:55 crc kubenswrapper[4966]: I0127 17:03:55.276886 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 17:03:55 crc kubenswrapper[4966]: I0127 17:03:55.282181 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wzrqq" Jan 27 17:03:55 crc kubenswrapper[4966]: I0127 17:03:55.408228 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9" Jan 27 17:03:55 crc kubenswrapper[4966]: I0127 17:03:55.860657 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pn2q5" Jan 27 17:03:56 crc kubenswrapper[4966]: I0127 17:03:56.242191 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 17:03:56 crc kubenswrapper[4966]: I0127 17:03:56.242592 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 17:03:56 crc kubenswrapper[4966]: I0127 
17:03:56.347406 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnqzt"] Jan 27 17:03:56 crc kubenswrapper[4966]: W0127 17:03:56.368864 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0537ee3_2679_4cee_96ad_d5f5dfa15783.slice/crio-463f7e4543e769cd837a8b3ead7ac7f7c010820d2fecac2ae4eb0cf5dcc21bcc WatchSource:0}: Error finding container 463f7e4543e769cd837a8b3ead7ac7f7c010820d2fecac2ae4eb0cf5dcc21bcc: Status 404 returned error can't find the container with id 463f7e4543e769cd837a8b3ead7ac7f7c010820d2fecac2ae4eb0cf5dcc21bcc Jan 27 17:03:56 crc kubenswrapper[4966]: I0127 17:03:56.691349 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnqzt" event={"ID":"d0537ee3-2679-4cee-96ad-d5f5dfa15783","Type":"ContainerStarted","Data":"0b77f3907e9dc9675b10089108dd7350476cabd76c6addd4e419c9ae4d400732"} Jan 27 17:03:56 crc kubenswrapper[4966]: I0127 17:03:56.692469 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnqzt" event={"ID":"d0537ee3-2679-4cee-96ad-d5f5dfa15783","Type":"ContainerStarted","Data":"463f7e4543e769cd837a8b3ead7ac7f7c010820d2fecac2ae4eb0cf5dcc21bcc"} Jan 27 17:03:56 crc kubenswrapper[4966]: I0127 17:03:56.694981 4966 generic.go:334] "Generic (PLEG): container finished" podID="987fabcb-b141-4f03-96fe-d2acf923452c" containerID="e1196732ce70b236cab0778765692e918642b8d2323d9109cf37e08adf689da2" exitCode=0 Jan 27 17:03:56 crc kubenswrapper[4966]: I0127 17:03:56.695047 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"987fabcb-b141-4f03-96fe-d2acf923452c","Type":"ContainerDied","Data":"e1196732ce70b236cab0778765692e918642b8d2323d9109cf37e08adf689da2"} Jan 27 17:03:56 crc kubenswrapper[4966]: I0127 17:03:56.962706 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-5jnwt" Jan 27 17:03:57 crc kubenswrapper[4966]: I0127 17:03:57.530333 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 17:03:57 crc kubenswrapper[4966]: I0127 17:03:57.532204 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 17:03:57 crc kubenswrapper[4966]: I0127 17:03:57.727216 4966 generic.go:334] "Generic (PLEG): container finished" podID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerID="0b77f3907e9dc9675b10089108dd7350476cabd76c6addd4e419c9ae4d400732" exitCode=0 Jan 27 17:03:57 crc kubenswrapper[4966]: I0127 17:03:57.727322 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnqzt" event={"ID":"d0537ee3-2679-4cee-96ad-d5f5dfa15783","Type":"ContainerDied","Data":"0b77f3907e9dc9675b10089108dd7350476cabd76c6addd4e419c9ae4d400732"} Jan 27 17:03:57 crc kubenswrapper[4966]: I0127 17:03:57.741423 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"987fabcb-b141-4f03-96fe-d2acf923452c","Type":"ContainerStarted","Data":"17ea7151f9b274a9ea316a12760baac98b2137f741b1fa35e90aa96f8fdee462"} Jan 27 17:03:57 crc kubenswrapper[4966]: I0127 17:03:57.757835 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wpbsk" Jan 27 17:03:58 crc kubenswrapper[4966]: I0127 
Jan 27 17:03:58 crc kubenswrapper[4966]: I0127 17:03:58.350092 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 27 17:03:58 crc kubenswrapper[4966]: I0127 17:03:58.473454 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 27 17:03:58 crc kubenswrapper[4966]: I0127 17:03:58.751781 4966 generic.go:334] "Generic (PLEG): container finished" podID="9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" containerID="eae15376315732b2d15dc71ea83591a96fbb2edefb826ed3e0188b21d55def84" exitCode=1
Jan 27 17:03:58 crc kubenswrapper[4966]: I0127 17:03:58.751890 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6","Type":"ContainerDied","Data":"eae15376315732b2d15dc71ea83591a96fbb2edefb826ed3e0188b21d55def84"}
Jan 27 17:03:59 crc kubenswrapper[4966]: I0127 17:03:59.041601 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-48xxm"
Jan 27 17:03:59 crc kubenswrapper[4966]: I0127 17:03:59.090457 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-48xxm"
Jan 27 17:03:59 crc kubenswrapper[4966]: I0127 17:03:59.638897 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 27 17:03:59 crc kubenswrapper[4966]: I0127 17:03:59.680850 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86647b877f-m6l5t"
Jan 27 17:03:59 crc kubenswrapper[4966]: I0127 17:03:59.756440 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 27 17:03:59 crc kubenswrapper[4966]: I0127 17:03:59.770701 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnqzt" event={"ID":"d0537ee3-2679-4cee-96ad-d5f5dfa15783","Type":"ContainerStarted","Data":"31dd5b573d4df349d2471e29e4cc9c68a53ee6c9abdaad1ec66f34f5d837a8a4"}
Jan 27 17:04:00 crc kubenswrapper[4966]: I0127 17:04:00.428603 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 27 17:04:00 crc kubenswrapper[4966]: I0127 17:04:00.535149 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9876t"
Jan 27 17:04:00 crc kubenswrapper[4966]: I0127 17:04:00.783730 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6","Type":"ContainerDied","Data":"e3f37d8428c71dbe62c3ba1da130e6d705a20de3dddbfb25893b1f5b9ecebbc2"}
Jan 27 17:04:00 crc kubenswrapper[4966]: I0127 17:04:00.784833 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3f37d8428c71dbe62c3ba1da130e6d705a20de3dddbfb25893b1f5b9ecebbc2"
Jan 27 17:04:00 crc kubenswrapper[4966]: I0127 17:04:00.790121 4966 generic.go:334] "Generic (PLEG): container finished" podID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerID="31dd5b573d4df349d2471e29e4cc9c68a53ee6c9abdaad1ec66f34f5d837a8a4" exitCode=0
Jan 27 17:04:00 crc kubenswrapper[4966]: I0127 17:04:00.790167 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnqzt" event={"ID":"d0537ee3-2679-4cee-96ad-d5f5dfa15783","Type":"ContainerDied","Data":"31dd5b573d4df349d2471e29e4cc9c68a53ee6c9abdaad1ec66f34f5d837a8a4"}
Jan 27 17:04:00 crc kubenswrapper[4966]: I0127 17:04:00.960638 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.059123 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.059206 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-temporary\") pod \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.059299 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ssh-key\") pod \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.059489 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ca-certs\") pod \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.059540 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config-secret\") pod \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.059634 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-workdir\") pod \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.059674 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config\") pod \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.059715 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-config-data\") pod \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.059746 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c89tt\" (UniqueName: \"kubernetes.io/projected/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-kube-api-access-c89tt\") pod \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\" (UID: \"9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6\") "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.073848 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" (UID: "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.078514 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-config-data" (OuterVolumeSpecName: "config-data") pod "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" (UID: "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.089444 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" (UID: "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.108359 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-kube-api-access-c89tt" (OuterVolumeSpecName: "kube-api-access-c89tt") pod "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" (UID: "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6"). InnerVolumeSpecName "kube-api-access-c89tt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.118951 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" (UID: "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.162975 4966 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.163271 4966 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.163283 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c89tt\" (UniqueName: \"kubernetes.io/projected/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-kube-api-access-c89tt\") on node \"crc\" DevicePath \"\""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.163327 4966 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.163338 4966 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.186209 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" (UID: "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.186357 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" (UID: "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.196155 4966 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.231042 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" (UID: "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.232166 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" (UID: "9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.265478 4966 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.265505 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.265515 4966 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.265570 4966 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.265581 4966 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:01 crc kubenswrapper[4966]: I0127 17:04:01.803674 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 17:04:02 crc kubenswrapper[4966]: I0127 17:04:02.693739 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7df4589fcc-vdfk5" Jan 27 17:04:02 crc kubenswrapper[4966]: I0127 17:04:02.826793 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnqzt" event={"ID":"d0537ee3-2679-4cee-96ad-d5f5dfa15783","Type":"ContainerStarted","Data":"c4cb63278fa2a67e8115715ecfd733750a9f2d10cb00d74e72dd94c51015b6d8"} Jan 27 17:04:02 crc kubenswrapper[4966]: I0127 17:04:02.863130 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jnqzt" podStartSLOduration=27.018160136 podStartE2EDuration="30.853972845s" podCreationTimestamp="2026-01-27 17:03:32 +0000 UTC" firstStartedPulling="2026-01-27 17:03:57.731051306 +0000 UTC m=+4904.033844784" lastFinishedPulling="2026-01-27 17:04:01.566864005 +0000 UTC m=+4907.869657493" observedRunningTime="2026-01-27 17:04:02.845252222 +0000 UTC m=+4909.148045730" watchObservedRunningTime="2026-01-27 17:04:02.853972845 +0000 UTC m=+4909.156766333" Jan 27 17:04:04 crc kubenswrapper[4966]: I0127 17:04:04.332564 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:04:04 crc kubenswrapper[4966]: I0127 17:04:04.333436 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:04:04 crc kubenswrapper[4966]: I0127 17:04:04.486083 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-72wzd" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="registry-server" probeResult="failure" output=< Jan 27 17:04:04 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:04:04 crc kubenswrapper[4966]: > Jan 27 17:04:05 crc kubenswrapper[4966]: I0127 
17:04:05.391577 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jnqzt" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerName="registry-server" probeResult="failure" output=< Jan 27 17:04:05 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:04:05 crc kubenswrapper[4966]: > Jan 27 17:04:05 crc kubenswrapper[4966]: I0127 17:04:05.478078 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.414119 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 17:04:06 crc kubenswrapper[4966]: E0127 17:04:06.415143 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" containerName="tempest-tests-tempest-tests-runner" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.415172 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" containerName="tempest-tests-tempest-tests-runner" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.415536 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6" containerName="tempest-tests-tempest-tests-runner" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.416709 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.423096 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xsspx" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.438505 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.513916 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjl6m\" (UniqueName: \"kubernetes.io/projected/2a62958e-1b8e-4063-8070-4a273e047872-kube-api-access-sjl6m\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2a62958e-1b8e-4063-8070-4a273e047872\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.514389 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2a62958e-1b8e-4063-8070-4a273e047872\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.617271 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjl6m\" (UniqueName: \"kubernetes.io/projected/2a62958e-1b8e-4063-8070-4a273e047872-kube-api-access-sjl6m\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2a62958e-1b8e-4063-8070-4a273e047872\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.617392 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2a62958e-1b8e-4063-8070-4a273e047872\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.619057 4966 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2a62958e-1b8e-4063-8070-4a273e047872\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.636941 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjl6m\" (UniqueName: \"kubernetes.io/projected/2a62958e-1b8e-4063-8070-4a273e047872-kube-api-access-sjl6m\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2a62958e-1b8e-4063-8070-4a273e047872\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.662562 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2a62958e-1b8e-4063-8070-4a273e047872\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 17:04:06 crc kubenswrapper[4966]: I0127 17:04:06.739491 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 17:04:07 crc kubenswrapper[4966]: I0127 17:04:07.342093 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 17:04:07 crc kubenswrapper[4966]: I0127 17:04:07.888391 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2a62958e-1b8e-4063-8070-4a273e047872","Type":"ContainerStarted","Data":"c0195d8e7b1193d4d49625888c32bb8725a2e39ed6909e251ead513f3c53e53c"} Jan 27 17:04:09 crc kubenswrapper[4966]: I0127 17:04:09.917106 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2a62958e-1b8e-4063-8070-4a273e047872","Type":"ContainerStarted","Data":"32db93c4f5810dcd0bbfa26df2d5fbc6dc44f5b0027c39a23e66321f2b272f42"} Jan 27 17:04:09 crc kubenswrapper[4966]: I0127 17:04:09.940304 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.944333065 podStartE2EDuration="3.940284733s" podCreationTimestamp="2026-01-27 17:04:06 +0000 UTC" firstStartedPulling="2026-01-27 17:04:07.358087834 +0000 UTC m=+4913.660881322" lastFinishedPulling="2026-01-27 17:04:09.354039502 +0000 UTC m=+4915.656832990" observedRunningTime="2026-01-27 17:04:09.929483925 +0000 UTC m=+4916.232277433" watchObservedRunningTime="2026-01-27 17:04:09.940284733 +0000 UTC m=+4916.243078221" Jan 27 17:04:12 crc kubenswrapper[4966]: I0127 17:04:12.775198 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:04:12 crc kubenswrapper[4966]: I0127 17:04:12.842421 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.015685 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-72wzd"] Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.017208 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-72wzd" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="registry-server" containerID="cri-o://50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0" gracePeriod=2 Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.411060 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.483024 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.719973 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.843831 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-catalog-content\") pod \"f55af454-c7f3-4d9c-8c30-108b947410a7\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.844285 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdz56\" (UniqueName: \"kubernetes.io/projected/f55af454-c7f3-4d9c-8c30-108b947410a7-kube-api-access-pdz56\") pod \"f55af454-c7f3-4d9c-8c30-108b947410a7\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.844363 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-utilities\") pod \"f55af454-c7f3-4d9c-8c30-108b947410a7\" (UID: \"f55af454-c7f3-4d9c-8c30-108b947410a7\") " Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.846892 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-utilities" (OuterVolumeSpecName: "utilities") pod "f55af454-c7f3-4d9c-8c30-108b947410a7" (UID: "f55af454-c7f3-4d9c-8c30-108b947410a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.854448 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55af454-c7f3-4d9c-8c30-108b947410a7-kube-api-access-pdz56" (OuterVolumeSpecName: "kube-api-access-pdz56") pod "f55af454-c7f3-4d9c-8c30-108b947410a7" (UID: "f55af454-c7f3-4d9c-8c30-108b947410a7"). InnerVolumeSpecName "kube-api-access-pdz56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.948612 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdz56\" (UniqueName: \"kubernetes.io/projected/f55af454-c7f3-4d9c-8c30-108b947410a7-kube-api-access-pdz56\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.948642 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.953512 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f55af454-c7f3-4d9c-8c30-108b947410a7" (UID: "f55af454-c7f3-4d9c-8c30-108b947410a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.991209 4966 generic.go:334] "Generic (PLEG): container finished" podID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerID="50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0" exitCode=0 Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.991265 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-72wzd" Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.991281 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72wzd" event={"ID":"f55af454-c7f3-4d9c-8c30-108b947410a7","Type":"ContainerDied","Data":"50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0"} Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.991322 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72wzd" event={"ID":"f55af454-c7f3-4d9c-8c30-108b947410a7","Type":"ContainerDied","Data":"aab47489e062102b6caaf7371ad364ce268e8fa5341c9e1e33f3196f5334984d"} Jan 27 17:04:14 crc kubenswrapper[4966]: I0127 17:04:14.991918 4966 scope.go:117] "RemoveContainer" containerID="50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0" Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.030635 4966 scope.go:117] "RemoveContainer" containerID="7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90" Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.038742 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-72wzd"] Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.065623 4966 scope.go:117] "RemoveContainer" containerID="732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249" Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.072863 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-72wzd"] Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.074976 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55af454-c7f3-4d9c-8c30-108b947410a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.118810 4966 scope.go:117] "RemoveContainer" containerID="50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0" Jan 27 17:04:15 crc kubenswrapper[4966]: E0127 17:04:15.120396 4966 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0\": container with ID starting with 50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0 not found: ID does not exist" containerID="50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0" Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.120445 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0"} err="failed to get container status \"50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0\": rpc error: code = NotFound desc = could not find container \"50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0\": container with ID starting with 50a69cadd976d7712e9ca39e9d297d310a97358081f27d3218a13e593202b1b0 not found: ID does not exist" Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.120476 4966 scope.go:117] "RemoveContainer" containerID="7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90" Jan 27 17:04:15 crc kubenswrapper[4966]: E0127 17:04:15.120934 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90\": container with ID starting with 7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90 not found: ID does not exist" containerID="7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90" Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.120967 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90"} err="failed to get container status \"7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90\": rpc error: code = NotFound desc = could not find container \"7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90\": container with ID starting with 7ff269dddff4baf8c92fba5237badaa6769045011cd9f6e80a4a6e62557e9b90 not found: ID does not exist" Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.120988 4966 scope.go:117] "RemoveContainer" containerID="732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249" Jan 27 17:04:15 crc kubenswrapper[4966]: E0127 17:04:15.121244 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249\": container with ID starting with 732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249 not found: ID does not exist" containerID="732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249" Jan 27 17:04:15 crc kubenswrapper[4966]: I0127 17:04:15.121272 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249"} err="failed to get container status \"732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249\": rpc error: code = NotFound desc = could not find container \"732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249\": container with ID starting with 732751d060a03373bc2ae2fdfa444e01c0f71fd0882aafd7b8b28e7f2c7c5249 not found: ID does not exist" Jan 27 17:04:16 crc kubenswrapper[4966]: I0127 17:04:16.533675 4966 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" path="/var/lib/kubelet/pods/f55af454-c7f3-4d9c-8c30-108b947410a7/volumes" Jan 27 17:04:16 crc kubenswrapper[4966]: I0127 17:04:16.830705 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnqzt"] Jan 27 17:04:16 crc kubenswrapper[4966]: I0127 17:04:16.831017 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jnqzt" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerName="registry-server" containerID="cri-o://c4cb63278fa2a67e8115715ecfd733750a9f2d10cb00d74e72dd94c51015b6d8" gracePeriod=2 Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.022159 4966 generic.go:334] "Generic (PLEG): container finished" podID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerID="c4cb63278fa2a67e8115715ecfd733750a9f2d10cb00d74e72dd94c51015b6d8" exitCode=0 Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.022236 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnqzt" event={"ID":"d0537ee3-2679-4cee-96ad-d5f5dfa15783","Type":"ContainerDied","Data":"c4cb63278fa2a67e8115715ecfd733750a9f2d10cb00d74e72dd94c51015b6d8"} Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.518420 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.635408 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-utilities\") pod \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.635713 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p7h7\" (UniqueName: \"kubernetes.io/projected/d0537ee3-2679-4cee-96ad-d5f5dfa15783-kube-api-access-7p7h7\") pod \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.635812 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-catalog-content\") pod \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\" (UID: \"d0537ee3-2679-4cee-96ad-d5f5dfa15783\") " Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.636193 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-utilities" (OuterVolumeSpecName: "utilities") pod "d0537ee3-2679-4cee-96ad-d5f5dfa15783" (UID: "d0537ee3-2679-4cee-96ad-d5f5dfa15783"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.637274 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.652181 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0537ee3-2679-4cee-96ad-d5f5dfa15783-kube-api-access-7p7h7" (OuterVolumeSpecName: "kube-api-access-7p7h7") pod "d0537ee3-2679-4cee-96ad-d5f5dfa15783" (UID: "d0537ee3-2679-4cee-96ad-d5f5dfa15783"). InnerVolumeSpecName "kube-api-access-7p7h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.656858 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0537ee3-2679-4cee-96ad-d5f5dfa15783" (UID: "d0537ee3-2679-4cee-96ad-d5f5dfa15783"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.739317 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p7h7\" (UniqueName: \"kubernetes.io/projected/d0537ee3-2679-4cee-96ad-d5f5dfa15783-kube-api-access-7p7h7\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:17 crc kubenswrapper[4966]: I0127 17:04:17.739355 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0537ee3-2679-4cee-96ad-d5f5dfa15783-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:04:18 crc kubenswrapper[4966]: I0127 17:04:18.036823 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnqzt" event={"ID":"d0537ee3-2679-4cee-96ad-d5f5dfa15783","Type":"ContainerDied","Data":"463f7e4543e769cd837a8b3ead7ac7f7c010820d2fecac2ae4eb0cf5dcc21bcc"} Jan 27 17:04:18 crc kubenswrapper[4966]: I0127 17:04:18.036887 4966 scope.go:117] "RemoveContainer" containerID="c4cb63278fa2a67e8115715ecfd733750a9f2d10cb00d74e72dd94c51015b6d8" Jan 27 17:04:18 crc kubenswrapper[4966]: I0127 17:04:18.037108 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnqzt" Jan 27 17:04:18 crc kubenswrapper[4966]: I0127 17:04:18.078003 4966 scope.go:117] "RemoveContainer" containerID="31dd5b573d4df349d2471e29e4cc9c68a53ee6c9abdaad1ec66f34f5d837a8a4" Jan 27 17:04:18 crc kubenswrapper[4966]: I0127 17:04:18.086239 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnqzt"] Jan 27 17:04:18 crc kubenswrapper[4966]: I0127 17:04:18.101306 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnqzt"] Jan 27 17:04:18 crc kubenswrapper[4966]: I0127 17:04:18.116247 4966 scope.go:117] "RemoveContainer" containerID="0b77f3907e9dc9675b10089108dd7350476cabd76c6addd4e419c9ae4d400732" Jan 27 17:04:18 crc kubenswrapper[4966]: I0127 17:04:18.211204 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cr8s9" Jan 27 17:04:18 crc kubenswrapper[4966]: I0127 17:04:18.534612 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" path="/var/lib/kubelet/pods/d0537ee3-2679-4cee-96ad-d5f5dfa15783/volumes" Jan 27 17:04:35 crc kubenswrapper[4966]: I0127 17:04:35.855360 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 17:04:40 crc kubenswrapper[4966]: I0127 17:04:40.119756 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:04:40 crc kubenswrapper[4966]: I0127 17:04:40.120312 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.394048 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lgskz/must-gather-24pds"] Jan 27 17:04:59 crc kubenswrapper[4966]: E0127 17:04:59.395191 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerName="registry-server" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.395211 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerName="registry-server" Jan 27 17:04:59 crc kubenswrapper[4966]: E0127 17:04:59.395233 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerName="extract-content" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.395241 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerName="extract-content" Jan 27 17:04:59 crc kubenswrapper[4966]: E0127 17:04:59.395269 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerName="extract-utilities" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.395281 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" 
containerName="extract-utilities" Jan 27 17:04:59 crc kubenswrapper[4966]: E0127 17:04:59.395296 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="registry-server" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.395303 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="registry-server" Jan 27 17:04:59 crc kubenswrapper[4966]: E0127 17:04:59.395335 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="extract-utilities" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.395344 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="extract-utilities" Jan 27 17:04:59 crc kubenswrapper[4966]: E0127 17:04:59.395357 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="extract-content" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.395365 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="extract-content" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.395644 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0537ee3-2679-4cee-96ad-d5f5dfa15783" containerName="registry-server" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.395685 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55af454-c7f3-4d9c-8c30-108b947410a7" containerName="registry-server" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.397522 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.399906 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lgskz"/"default-dockercfg-6ww7r" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.402033 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lgskz"/"kube-root-ca.crt" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.402065 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lgskz"/"openshift-service-ca.crt" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.411389 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lgskz/must-gather-24pds"] Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.536539 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-must-gather-output\") pod \"must-gather-24pds\" (UID: \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\") " pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.536775 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcbr\" (UniqueName: \"kubernetes.io/projected/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-kube-api-access-kjcbr\") pod \"must-gather-24pds\" (UID: \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\") " pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.639494 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcbr\" 
(UniqueName: \"kubernetes.io/projected/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-kube-api-access-kjcbr\") pod \"must-gather-24pds\" (UID: \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\") " pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.639835 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-must-gather-output\") pod \"must-gather-24pds\" (UID: \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\") " pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.640232 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-must-gather-output\") pod \"must-gather-24pds\" (UID: \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\") " pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:04:59 crc kubenswrapper[4966]: I0127 17:04:59.752013 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcbr\" (UniqueName: \"kubernetes.io/projected/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-kube-api-access-kjcbr\") pod \"must-gather-24pds\" (UID: \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\") " pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:05:00 crc kubenswrapper[4966]: I0127 17:05:00.018044 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:05:01 crc kubenswrapper[4966]: I0127 17:05:01.132213 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lgskz/must-gather-24pds"] Jan 27 17:05:01 crc kubenswrapper[4966]: I0127 17:05:01.581454 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/must-gather-24pds" event={"ID":"a912b0fc-de7a-49d3-ba0b-8ae0475bf650","Type":"ContainerStarted","Data":"b2b6ba460b37d4054f48c14107473ec89aa3ec20d8682f76b17dfbed36159c84"} Jan 27 17:05:08 crc kubenswrapper[4966]: I0127 17:05:08.658748 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/must-gather-24pds" event={"ID":"a912b0fc-de7a-49d3-ba0b-8ae0475bf650","Type":"ContainerStarted","Data":"427af27be2b0091963f55d63f3f31dc6b52cc63fa57db1f5086aa07f82878b69"} Jan 27 17:05:09 crc kubenswrapper[4966]: I0127 17:05:09.677078 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/must-gather-24pds" event={"ID":"a912b0fc-de7a-49d3-ba0b-8ae0475bf650","Type":"ContainerStarted","Data":"982a1a65f7d320712a671ab6c0c67e2642c2b38663d2742d518e8d03d7403ca4"} Jan 27 17:05:09 crc kubenswrapper[4966]: I0127 17:05:09.719494 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lgskz/must-gather-24pds" podStartSLOduration=3.7908873180000002 podStartE2EDuration="10.719463329s" podCreationTimestamp="2026-01-27 17:04:59 +0000 UTC" firstStartedPulling="2026-01-27 17:05:01.139661751 +0000 UTC m=+4967.442455259" lastFinishedPulling="2026-01-27 17:05:08.068237772 +0000 UTC m=+4974.371031270" observedRunningTime="2026-01-27 17:05:09.701404204 +0000 UTC m=+4976.004197712" watchObservedRunningTime="2026-01-27 17:05:09.719463329 +0000 UTC m=+4976.022256827" Jan 27 17:05:10 crc kubenswrapper[4966]: I0127 17:05:10.119259 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:05:10 crc kubenswrapper[4966]: I0127 17:05:10.119574 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:05:14 crc kubenswrapper[4966]: I0127 17:05:14.845449 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lgskz/crc-debug-dxlc7"] Jan 27 17:05:14 crc kubenswrapper[4966]: I0127 17:05:14.849838 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:05:14 crc kubenswrapper[4966]: I0127 17:05:14.922331 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-host\") pod \"crc-debug-dxlc7\" (UID: \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\") " pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:05:14 crc kubenswrapper[4966]: I0127 17:05:14.922632 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqz2\" (UniqueName: \"kubernetes.io/projected/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-kube-api-access-ttqz2\") pod \"crc-debug-dxlc7\" (UID: \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\") " pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:05:15 crc kubenswrapper[4966]: I0127 17:05:15.025887 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-host\") pod \"crc-debug-dxlc7\" (UID: \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\") " pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:05:15 crc kubenswrapper[4966]: I0127 17:05:15.026031 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqz2\" (UniqueName: \"kubernetes.io/projected/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-kube-api-access-ttqz2\") pod \"crc-debug-dxlc7\" (UID: \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\") " pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:05:15 crc kubenswrapper[4966]: I0127 17:05:15.026447 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-host\") pod \"crc-debug-dxlc7\" (UID: \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\") " pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:05:15 crc kubenswrapper[4966]: I0127 17:05:15.066198 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqz2\" (UniqueName: \"kubernetes.io/projected/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-kube-api-access-ttqz2\") pod \"crc-debug-dxlc7\" (UID: \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\") " pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:05:15 crc kubenswrapper[4966]: I0127 17:05:15.175023 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:05:15 crc kubenswrapper[4966]: W0127 17:05:15.222533 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc6a6d9f_6a36_4062_9d0d_388d49ba3d6c.slice/crio-2204a84f71eabdacdbb0e363796d7e47148fd1a7c51c8c4916760c0f725c3467 WatchSource:0}: Error finding container 2204a84f71eabdacdbb0e363796d7e47148fd1a7c51c8c4916760c0f725c3467: Status 404 returned error can't find the container with id 2204a84f71eabdacdbb0e363796d7e47148fd1a7c51c8c4916760c0f725c3467 Jan 27 17:05:15 crc kubenswrapper[4966]: I0127 17:05:15.756928 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/crc-debug-dxlc7" event={"ID":"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c","Type":"ContainerStarted","Data":"2204a84f71eabdacdbb0e363796d7e47148fd1a7c51c8c4916760c0f725c3467"} Jan 27 17:05:25 crc kubenswrapper[4966]: I0127 17:05:25.864146 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/crc-debug-dxlc7" event={"ID":"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c","Type":"ContainerStarted","Data":"5cb3404bb4fe8913ec332d885b95ba429a3d602ce985a33e2849d41263a99c73"} Jan 27 17:05:25 crc kubenswrapper[4966]: I0127 17:05:25.885080 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lgskz/crc-debug-dxlc7" podStartSLOduration=1.514920069 podStartE2EDuration="11.885062689s" podCreationTimestamp="2026-01-27 17:05:14 +0000 UTC" firstStartedPulling="2026-01-27 17:05:15.223332013 +0000 UTC m=+4981.526125501" lastFinishedPulling="2026-01-27 17:05:25.593474643 +0000 UTC m=+4991.896268121" observedRunningTime="2026-01-27 17:05:25.878180314 +0000 UTC m=+4992.180973812" watchObservedRunningTime="2026-01-27 17:05:25.885062689 +0000 UTC m=+4992.187856177" Jan 27 17:05:40 crc kubenswrapper[4966]: I0127 17:05:40.120835 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:05:40 crc kubenswrapper[4966]: I0127 17:05:40.121610 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:05:40 crc kubenswrapper[4966]: I0127 17:05:40.121711 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 17:05:40 crc kubenswrapper[4966]: I0127 17:05:40.124831 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:05:40 crc kubenswrapper[4966]: I0127 17:05:40.125143 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" 
containerName="machine-config-daemon" containerID="cri-o://9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" gracePeriod=600 Jan 27 17:05:40 crc kubenswrapper[4966]: E0127 17:05:40.993275 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:05:41 crc kubenswrapper[4966]: I0127 17:05:41.013103 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" exitCode=0 Jan 27 17:05:41 crc kubenswrapper[4966]: I0127 17:05:41.013150 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246"} Jan 27 17:05:41 crc kubenswrapper[4966]: I0127 17:05:41.013211 4966 scope.go:117] "RemoveContainer" containerID="aac0539be57f6932ac72b94428abfbd1ca11f23206b67d7996e17f5ed19cf93c" Jan 27 17:05:41 crc kubenswrapper[4966]: I0127 17:05:41.014112 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:05:41 crc kubenswrapper[4966]: E0127 17:05:41.014466 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:05:53 crc kubenswrapper[4966]: I0127 17:05:53.522702 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:05:53 crc kubenswrapper[4966]: E0127 17:05:53.523587 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:06:05 crc kubenswrapper[4966]: I0127 17:06:05.275890 4966 generic.go:334] "Generic (PLEG): container finished" podID="525a9ae1-69bf-4f75-b283-c0844b828a90" containerID="f013bc67e00e99c0b09bf2f8d1090628c6bfd382bf7a380af3041ab2d9038e94" exitCode=0 Jan 27 17:06:05 crc kubenswrapper[4966]: I0127 17:06:05.278520 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" event={"ID":"525a9ae1-69bf-4f75-b283-c0844b828a90","Type":"ContainerDied","Data":"f013bc67e00e99c0b09bf2f8d1090628c6bfd382bf7a380af3041ab2d9038e94"} Jan 27 17:06:05 crc kubenswrapper[4966]: I0127 17:06:05.278746 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" 
event={"ID":"525a9ae1-69bf-4f75-b283-c0844b828a90","Type":"ContainerStarted","Data":"9e2df0e5afbc968ee2616f0e7166dbdcb1ce591b0c033b6ddcfc1d8500772862"} Jan 27 17:06:05 crc kubenswrapper[4966]: I0127 17:06:05.520726 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:06:05 crc kubenswrapper[4966]: E0127 17:06:05.521085 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:06:15 crc kubenswrapper[4966]: I0127 17:06:15.399966 4966 generic.go:334] "Generic (PLEG): container finished" podID="bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c" containerID="5cb3404bb4fe8913ec332d885b95ba429a3d602ce985a33e2849d41263a99c73" exitCode=0 Jan 27 17:06:15 crc kubenswrapper[4966]: I0127 17:06:15.400024 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/crc-debug-dxlc7" event={"ID":"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c","Type":"ContainerDied","Data":"5cb3404bb4fe8913ec332d885b95ba429a3d602ce985a33e2849d41263a99c73"} Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.522057 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:06:16 crc kubenswrapper[4966]: E0127 17:06:16.523124 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.548949 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.588619 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lgskz/crc-debug-dxlc7"] Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.600985 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lgskz/crc-debug-dxlc7"] Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.692221 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-host\") pod \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\" (UID: \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\") " Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.692318 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttqz2\" (UniqueName: \"kubernetes.io/projected/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-kube-api-access-ttqz2\") pod \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\" (UID: \"bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c\") " Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.694047 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-host" (OuterVolumeSpecName: "host") pod "bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c" (UID: "bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.704752 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-kube-api-access-ttqz2" (OuterVolumeSpecName: "kube-api-access-ttqz2") pod "bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c" (UID: "bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c"). InnerVolumeSpecName "kube-api-access-ttqz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.795810 4966 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-host\") on node \"crc\" DevicePath \"\"" Jan 27 17:06:16 crc kubenswrapper[4966]: I0127 17:06:16.795850 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttqz2\" (UniqueName: \"kubernetes.io/projected/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c-kube-api-access-ttqz2\") on node \"crc\" DevicePath \"\"" Jan 27 17:06:17 crc kubenswrapper[4966]: I0127 17:06:17.421253 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-dxlc7" Jan 27 17:06:17 crc kubenswrapper[4966]: I0127 17:06:17.421741 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2204a84f71eabdacdbb0e363796d7e47148fd1a7c51c8c4916760c0f725c3467" Jan 27 17:06:17 crc kubenswrapper[4966]: I0127 17:06:17.817940 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lgskz/crc-debug-kkdg5"] Jan 27 17:06:17 crc kubenswrapper[4966]: E0127 17:06:17.818730 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c" containerName="container-00" Jan 27 17:06:17 crc kubenswrapper[4966]: I0127 17:06:17.818744 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c" containerName="container-00" Jan 27 17:06:17 crc kubenswrapper[4966]: I0127 17:06:17.818994 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c" containerName="container-00" Jan 27 17:06:17 crc kubenswrapper[4966]: I0127 17:06:17.819890 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:17 crc kubenswrapper[4966]: I0127 17:06:17.924254 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-host\") pod \"crc-debug-kkdg5\" (UID: \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\") " pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:17 crc kubenswrapper[4966]: I0127 17:06:17.924431 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwz2\" (UniqueName: \"kubernetes.io/projected/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-kube-api-access-2kwz2\") pod \"crc-debug-kkdg5\" (UID: \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\") " pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:18 crc kubenswrapper[4966]: I0127 17:06:18.027855 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-host\") pod \"crc-debug-kkdg5\" (UID: \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\") " pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:18 crc kubenswrapper[4966]: I0127 17:06:18.028300 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kwz2\" (UniqueName: \"kubernetes.io/projected/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-kube-api-access-2kwz2\") pod \"crc-debug-kkdg5\" (UID: \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\") " pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:18 crc kubenswrapper[4966]: I0127 17:06:18.028641 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-host\") pod \"crc-debug-kkdg5\" (UID: \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\") " pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:18 crc kubenswrapper[4966]: I0127 17:06:18.049244 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kwz2\" (UniqueName: \"kubernetes.io/projected/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-kube-api-access-2kwz2\") pod \"crc-debug-kkdg5\" (UID: \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\") " pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:18 crc 
kubenswrapper[4966]: I0127 17:06:18.141490 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:18 crc kubenswrapper[4966]: W0127 17:06:18.196160 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf4079e4_7e9e_4ed5_96a5_8e39b9f4f27b.slice/crio-a7edfe41b34a8ce4ca297687285cb2d2ef1b5d50d62dda18890d1aa638a8a988 WatchSource:0}: Error finding container a7edfe41b34a8ce4ca297687285cb2d2ef1b5d50d62dda18890d1aa638a8a988: Status 404 returned error can't find the container with id a7edfe41b34a8ce4ca297687285cb2d2ef1b5d50d62dda18890d1aa638a8a988 Jan 27 17:06:18 crc kubenswrapper[4966]: I0127 17:06:18.434388 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/crc-debug-kkdg5" event={"ID":"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b","Type":"ContainerStarted","Data":"a7edfe41b34a8ce4ca297687285cb2d2ef1b5d50d62dda18890d1aa638a8a988"} Jan 27 17:06:18 crc kubenswrapper[4966]: I0127 17:06:18.538621 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c" path="/var/lib/kubelet/pods/bc6a6d9f-6a36-4062-9d0d-388d49ba3d6c/volumes" Jan 27 17:06:19 crc kubenswrapper[4966]: I0127 17:06:19.447292 4966 generic.go:334] "Generic (PLEG): container finished" podID="cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b" containerID="b4ee86a976bd2f2236d22d0cd3310cb75d6dfa86ade0d81c95278d4fa5eaea51" exitCode=0 Jan 27 17:06:19 crc kubenswrapper[4966]: I0127 17:06:19.447583 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/crc-debug-kkdg5" event={"ID":"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b","Type":"ContainerDied","Data":"b4ee86a976bd2f2236d22d0cd3310cb75d6dfa86ade0d81c95278d4fa5eaea51"} Jan 27 17:06:20 crc kubenswrapper[4966]: I0127 17:06:20.478115 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lgskz/crc-debug-kkdg5"] Jan 27 17:06:20 crc kubenswrapper[4966]: I0127 17:06:20.490379 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lgskz/crc-debug-kkdg5"] Jan 27 17:06:20 crc kubenswrapper[4966]: I0127 17:06:20.584103 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:20 crc kubenswrapper[4966]: I0127 17:06:20.691934 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-host\") pod \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\" (UID: \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\") " Jan 27 17:06:20 crc kubenswrapper[4966]: I0127 17:06:20.692138 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-host" (OuterVolumeSpecName: "host") pod "cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b" (UID: "cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:06:20 crc kubenswrapper[4966]: I0127 17:06:20.692201 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kwz2\" (UniqueName: \"kubernetes.io/projected/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-kube-api-access-2kwz2\") pod \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\" (UID: \"cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b\") " Jan 27 17:06:20 crc kubenswrapper[4966]: I0127 17:06:20.694818 4966 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-host\") on node \"crc\" DevicePath \"\"" Jan 27 17:06:20 crc kubenswrapper[4966]: I0127 17:06:20.702798 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-kube-api-access-2kwz2" (OuterVolumeSpecName: "kube-api-access-2kwz2") pod "cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b" (UID: "cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b"). InnerVolumeSpecName "kube-api-access-2kwz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:06:20 crc kubenswrapper[4966]: I0127 17:06:20.797630 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kwz2\" (UniqueName: \"kubernetes.io/projected/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b-kube-api-access-2kwz2\") on node \"crc\" DevicePath \"\"" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.470543 4966 scope.go:117] "RemoveContainer" containerID="b4ee86a976bd2f2236d22d0cd3310cb75d6dfa86ade0d81c95278d4fa5eaea51" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.471011 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-kkdg5" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.693684 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lgskz/crc-debug-54cpw"] Jan 27 17:06:21 crc kubenswrapper[4966]: E0127 17:06:21.694196 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b" containerName="container-00" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.694208 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b" containerName="container-00" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.694411 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b" containerName="container-00" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.695148 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.831072 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd442471-5693-4884-9479-659002b91119-host\") pod \"crc-debug-54cpw\" (UID: \"fd442471-5693-4884-9479-659002b91119\") " pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.831187 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddtn\" (UniqueName: \"kubernetes.io/projected/fd442471-5693-4884-9479-659002b91119-kube-api-access-qddtn\") pod \"crc-debug-54cpw\" (UID: \"fd442471-5693-4884-9479-659002b91119\") " pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.934785 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd442471-5693-4884-9479-659002b91119-host\") pod \"crc-debug-54cpw\" (UID: \"fd442471-5693-4884-9479-659002b91119\") " pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.934856 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qddtn\" (UniqueName: \"kubernetes.io/projected/fd442471-5693-4884-9479-659002b91119-kube-api-access-qddtn\") pod \"crc-debug-54cpw\" (UID: \"fd442471-5693-4884-9479-659002b91119\") " pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.934949 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd442471-5693-4884-9479-659002b91119-host\") pod \"crc-debug-54cpw\" (UID: \"fd442471-5693-4884-9479-659002b91119\") " pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:21 crc kubenswrapper[4966]: I0127 17:06:21.955788 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qddtn\" (UniqueName: \"kubernetes.io/projected/fd442471-5693-4884-9479-659002b91119-kube-api-access-qddtn\") pod \"crc-debug-54cpw\" (UID: \"fd442471-5693-4884-9479-659002b91119\") " pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:22 crc kubenswrapper[4966]: I0127 17:06:22.022274 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:22 crc kubenswrapper[4966]: W0127 17:06:22.070049 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd442471_5693_4884_9479_659002b91119.slice/crio-fba72dc92aef56938fd348dd4e3ef3a6726c16022df2064c028ddf2a1a1faea5 WatchSource:0}: Error finding container fba72dc92aef56938fd348dd4e3ef3a6726c16022df2064c028ddf2a1a1faea5: Status 404 returned error can't find the container with id fba72dc92aef56938fd348dd4e3ef3a6726c16022df2064c028ddf2a1a1faea5 Jan 27 17:06:22 crc kubenswrapper[4966]: I0127 17:06:22.482322 4966 generic.go:334] "Generic (PLEG): container finished" podID="fd442471-5693-4884-9479-659002b91119" containerID="5f48f709f7b43bf6680138d358597d0ba4012bd6c59136fd1ceddabb5fd2c265" exitCode=0 Jan 27 17:06:22 crc kubenswrapper[4966]: I0127 17:06:22.482422 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/crc-debug-54cpw" event={"ID":"fd442471-5693-4884-9479-659002b91119","Type":"ContainerDied","Data":"5f48f709f7b43bf6680138d358597d0ba4012bd6c59136fd1ceddabb5fd2c265"} Jan 27 17:06:22 crc kubenswrapper[4966]: I0127 17:06:22.482705 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/crc-debug-54cpw" event={"ID":"fd442471-5693-4884-9479-659002b91119","Type":"ContainerStarted","Data":"fba72dc92aef56938fd348dd4e3ef3a6726c16022df2064c028ddf2a1a1faea5"} Jan 27 17:06:22 crc kubenswrapper[4966]: I0127 17:06:22.534055 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b" path="/var/lib/kubelet/pods/cf4079e4-7e9e-4ed5-96a5-8e39b9f4f27b/volumes" Jan 27 17:06:22 crc kubenswrapper[4966]: I0127 17:06:22.534707 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lgskz/crc-debug-54cpw"] Jan 27 17:06:22 crc kubenswrapper[4966]: I0127 17:06:22.542939 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lgskz/crc-debug-54cpw"] Jan 27 17:06:23 crc kubenswrapper[4966]: I0127 17:06:23.413585 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 17:06:23 crc kubenswrapper[4966]: I0127 17:06:23.413891 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 17:06:23 crc kubenswrapper[4966]: I0127 17:06:23.638080 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:23 crc kubenswrapper[4966]: I0127 17:06:23.780453 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qddtn\" (UniqueName: \"kubernetes.io/projected/fd442471-5693-4884-9479-659002b91119-kube-api-access-qddtn\") pod \"fd442471-5693-4884-9479-659002b91119\" (UID: \"fd442471-5693-4884-9479-659002b91119\") " Jan 27 17:06:23 crc kubenswrapper[4966]: I0127 17:06:23.780954 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd442471-5693-4884-9479-659002b91119-host\") pod \"fd442471-5693-4884-9479-659002b91119\" (UID: \"fd442471-5693-4884-9479-659002b91119\") " Jan 27 17:06:23 crc kubenswrapper[4966]: I0127 17:06:23.781146 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd442471-5693-4884-9479-659002b91119-host" (OuterVolumeSpecName: "host") pod "fd442471-5693-4884-9479-659002b91119" (UID: "fd442471-5693-4884-9479-659002b91119"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:06:23 crc kubenswrapper[4966]: I0127 17:06:23.781874 4966 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd442471-5693-4884-9479-659002b91119-host\") on node \"crc\" DevicePath \"\"" Jan 27 17:06:23 crc kubenswrapper[4966]: I0127 17:06:23.795280 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd442471-5693-4884-9479-659002b91119-kube-api-access-qddtn" (OuterVolumeSpecName: "kube-api-access-qddtn") pod "fd442471-5693-4884-9479-659002b91119" (UID: "fd442471-5693-4884-9479-659002b91119"). InnerVolumeSpecName "kube-api-access-qddtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:06:23 crc kubenswrapper[4966]: I0127 17:06:23.883695 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qddtn\" (UniqueName: \"kubernetes.io/projected/fd442471-5693-4884-9479-659002b91119-kube-api-access-qddtn\") on node \"crc\" DevicePath \"\"" Jan 27 17:06:24 crc kubenswrapper[4966]: I0127 17:06:24.510215 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fba72dc92aef56938fd348dd4e3ef3a6726c16022df2064c028ddf2a1a1faea5" Jan 27 17:06:24 crc kubenswrapper[4966]: I0127 17:06:24.510251 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lgskz/crc-debug-54cpw" Jan 27 17:06:24 crc kubenswrapper[4966]: I0127 17:06:24.535012 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd442471-5693-4884-9479-659002b91119" path="/var/lib/kubelet/pods/fd442471-5693-4884-9479-659002b91119/volumes" Jan 27 17:06:29 crc kubenswrapper[4966]: I0127 17:06:29.521508 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:06:29 crc kubenswrapper[4966]: E0127 17:06:29.524205 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:06:43 crc kubenswrapper[4966]: I0127 17:06:43.418752 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 17:06:43 crc kubenswrapper[4966]: I0127 17:06:43.425088 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5fcbd5f794-2hhjm" Jan 27 17:06:44 crc kubenswrapper[4966]: I0127 17:06:44.529761 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:06:44 crc kubenswrapper[4966]: E0127 17:06:44.531001 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:06:56 crc kubenswrapper[4966]: I0127 17:06:56.262826 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_677fc3da-714e-4157-a167-cd49355a7e62/aodh-api/0.log" Jan 27 17:06:56 crc kubenswrapper[4966]: I0127 17:06:56.432691 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_677fc3da-714e-4157-a167-cd49355a7e62/aodh-listener/0.log" Jan 27 17:06:56 crc kubenswrapper[4966]: I0127 17:06:56.456969 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_677fc3da-714e-4157-a167-cd49355a7e62/aodh-notifier/0.log" Jan 27 17:06:56 crc kubenswrapper[4966]: I0127 17:06:56.479138 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_677fc3da-714e-4157-a167-cd49355a7e62/aodh-evaluator/0.log" Jan 27 17:06:56 crc kubenswrapper[4966]: I0127 17:06:56.641879 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8d96bc9db-xwrs6_8ad0899a-bde7-4576-8195-6719d77a51d0/barbican-api/0.log" Jan 27 17:06:56 crc kubenswrapper[4966]: I0127 17:06:56.666606 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8d96bc9db-xwrs6_8ad0899a-bde7-4576-8195-6719d77a51d0/barbican-api-log/0.log" Jan 27 17:06:56 crc kubenswrapper[4966]: I0127 17:06:56.849748 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-797bd7c9db-vjlzd_e49ad1cd-2925-41c2-b562-8a3478420d39/barbican-keystone-listener/0.log" Jan 27 17:06:56 crc kubenswrapper[4966]: I0127 17:06:56.892920 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-797bd7c9db-vjlzd_e49ad1cd-2925-41c2-b562-8a3478420d39/barbican-keystone-listener-log/0.log" Jan 27 17:06:56 crc kubenswrapper[4966]: I0127 17:06:56.962210 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d55df94bc-bzw98_db44a3bd-5583-4a79-838d-6a21f083e020/barbican-worker/0.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.060646 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d55df94bc-bzw98_db44a3bd-5583-4a79-838d-6a21f083e020/barbican-worker-log/0.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.202613 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-sqgrr_c33cbdef-0a35-4b70-9eeb-09b7fb7a3e2e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.305355 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_20044d98-b229-4e9a-946f-b18902841fe6/ceilometer-central-agent/1.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.389495 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_20044d98-b229-4e9a-946f-b18902841fe6/ceilometer-central-agent/0.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.441267 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_20044d98-b229-4e9a-946f-b18902841fe6/ceilometer-notification-agent/0.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.457687 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_20044d98-b229-4e9a-946f-b18902841fe6/proxy-httpd/0.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.542599 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_20044d98-b229-4e9a-946f-b18902841fe6/sg-core/0.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.709393 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7212827-c660-4b8a-b0ef-62d91f255dd6/cinder-api-log/0.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.713015 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7212827-c660-4b8a-b0ef-62d91f255dd6/cinder-api/0.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.886856 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_987fabcb-b141-4f03-96fe-d2acf923452c/cinder-scheduler/1.log" Jan 27 17:06:57 crc kubenswrapper[4966]: I0127 17:06:57.931224 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_987fabcb-b141-4f03-96fe-d2acf923452c/cinder-scheduler/0.log" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.066557 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_987fabcb-b141-4f03-96fe-d2acf923452c/probe/0.log" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.159879 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-c9pf2_1273b23b-1970-4c55-93e5-aa72f8b416af/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.395589 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5tcs7_5bb77603-0e4b-4d98-961e-a669b0ceee35/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.444046 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-c482c_327ce86b-3b3f-4b71-b51a-498ec4a19e63/init/0.log" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.521686 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:06:58 crc kubenswrapper[4966]: E0127 17:06:58.522000 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.595491 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-c482c_327ce86b-3b3f-4b71-b51a-498ec4a19e63/init/0.log" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.649742 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-c482c_327ce86b-3b3f-4b71-b51a-498ec4a19e63/dnsmasq-dns/0.log" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.660228 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mhsm6_0968ae26-d5aa-401f-b777-78d1ee76cbad/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.829148 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c3089e5a-e4ea-4397-a676-fd4230311639/glance-log/0.log" Jan 27 17:06:58 crc kubenswrapper[4966]: I0127 17:06:58.835944 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c3089e5a-e4ea-4397-a676-fd4230311639/glance-httpd/0.log" Jan 27 17:06:59 crc kubenswrapper[4966]: I0127 17:06:59.035037 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c4e4ebf-c37e-45fb-9248-b60defccda7f/glance-httpd/0.log" Jan 27 17:06:59 crc kubenswrapper[4966]: I0127 17:06:59.073035 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c4e4ebf-c37e-45fb-9248-b60defccda7f/glance-log/0.log" Jan 27 17:06:59 crc kubenswrapper[4966]: I0127 17:06:59.681935 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-dd4644df4-l2k8s_ce9500c8-7004-47aa-a51a-050e3ffa6555/heat-api/0.log" Jan 27 17:06:59 crc kubenswrapper[4966]: I0127 17:06:59.771145 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-759fbdccc8-4p9db_f9a93c7c-a441-4742-9b3f-6b71992b842e/heat-engine/0.log" Jan 27 17:06:59 crc kubenswrapper[4966]: I0127 17:06:59.849991 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mftzq_7a905a6d-3c1e-40fa-bde0-f30f5fcb0a44/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:06:59 crc kubenswrapper[4966]: I0127 17:06:59.867377 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-757f8b56d5-hnpxj_6c074d4e-440a-4518-897c-d05c3197ae79/heat-cfnapi/0.log" Jan 27 17:06:59 crc kubenswrapper[4966]: I0127 17:06:59.929835 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zpfml_37e4ec2b-5f24-4dae-9e35-7ec76860c36c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:00 crc kubenswrapper[4966]: I0127 17:07:00.145652 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492221-dflqh_564b32c4-a589-4d48-82f9-5d56159d4674/keystone-cron/0.log" Jan 27 17:07:00 crc kubenswrapper[4966]: I0127 17:07:00.377646 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0ff42153-39d6-45d2-b4dc-e2b1e2eddc5a/kube-state-metrics/0.log" Jan 27 17:07:00 crc kubenswrapper[4966]: I0127 17:07:00.385992 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7d7bd8f5c6-k4z4r_1ac92552-e80a-486f-a9cf-4e57907928ca/keystone-api/0.log" Jan 27 17:07:00 crc kubenswrapper[4966]: I0127 17:07:00.435390 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bzb8r_8d3c294b-3e69-4b63-8d6a-471ed67944bc/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:00 crc kubenswrapper[4966]: I0127 17:07:00.564245 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-vkkgc_2f03cdf3-ef7a-4b45-b92a-346469d17373/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:00 crc kubenswrapper[4966]: I0127 17:07:00.742654 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_94241c97-14d2-406a-9ea5-1b9797ec4785/mysqld-exporter/0.log" Jan 27 17:07:01 crc kubenswrapper[4966]: I0127 17:07:01.092165 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ds29n_f9e3c1ee-653a-41b5-8f96-ebdb1167552c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:01 crc kubenswrapper[4966]: I0127 17:07:01.119096 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c7444dc4c-gxtck_3fce5b18-2272-4aba-a5cc-75f98ee0b1f7/neutron-httpd/0.log" Jan 27 17:07:01 crc kubenswrapper[4966]: I0127 17:07:01.153627 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c7444dc4c-gxtck_3fce5b18-2272-4aba-a5cc-75f98ee0b1f7/neutron-api/0.log" Jan 27 17:07:01 crc kubenswrapper[4966]: I0127 17:07:01.743577 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3b8ddb20-6758-4eff-a0ea-0c0437f990e4/nova-api-log/0.log" Jan 27 17:07:02 crc kubenswrapper[4966]: I0127 17:07:02.017672 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3b8ddb20-6758-4eff-a0ea-0c0437f990e4/nova-api-api/0.log" Jan 27 17:07:02 crc kubenswrapper[4966]: I0127 17:07:02.266046 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5a04b039-8946-4237-ae6b-1d1ece6927d5/nova-cell1-conductor-conductor/0.log" Jan 27 17:07:02 crc kubenswrapper[4966]: I0127 17:07:02.291095 
4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ea7f80cc-7f25-4546-b728-b483e4690acd/nova-cell0-conductor-conductor/0.log" Jan 27 17:07:02 crc kubenswrapper[4966]: I0127 17:07:02.461625 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d83cc2a1-a7e1-4a08-be19-acfcabb8bafa/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 17:07:02 crc kubenswrapper[4966]: I0127 17:07:02.641244 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-cvnb5_da633cd7-e3b3-4dc6-a2b0-c6c49ab44bfe/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:03 crc kubenswrapper[4966]: I0127 17:07:03.270865 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_dffd2675-6c13-4c1e-82c5-d19c859db134/nova-metadata-log/0.log" Jan 27 17:07:03 crc kubenswrapper[4966]: I0127 17:07:03.459761 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e083fa11-ceea-4516-8fb9-84b13faf4411/nova-scheduler-scheduler/0.log" Jan 27 17:07:04 crc kubenswrapper[4966]: I0127 17:07:04.020001 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a1be6855-0a73-406a-93d5-625f7fca558b/mysql-bootstrap/0.log" Jan 27 17:07:04 crc kubenswrapper[4966]: I0127 17:07:04.231950 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a1be6855-0a73-406a-93d5-625f7fca558b/mysql-bootstrap/0.log" Jan 27 17:07:04 crc kubenswrapper[4966]: I0127 17:07:04.239989 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a1be6855-0a73-406a-93d5-625f7fca558b/galera/1.log" Jan 27 17:07:04 crc kubenswrapper[4966]: I0127 17:07:04.273650 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a1be6855-0a73-406a-93d5-625f7fca558b/galera/0.log" Jan 27 17:07:04 crc kubenswrapper[4966]: I0127 17:07:04.571167 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1dc01362-ea5a-48fe-b67f-1e00b193c36e/mysql-bootstrap/0.log" Jan 27 17:07:04 crc kubenswrapper[4966]: I0127 17:07:04.732374 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1dc01362-ea5a-48fe-b67f-1e00b193c36e/mysql-bootstrap/0.log" Jan 27 17:07:04 crc kubenswrapper[4966]: I0127 17:07:04.791476 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1dc01362-ea5a-48fe-b67f-1e00b193c36e/galera/1.log" Jan 27 17:07:04 crc kubenswrapper[4966]: I0127 17:07:04.838425 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1dc01362-ea5a-48fe-b67f-1e00b193c36e/galera/0.log" Jan 27 17:07:04 crc kubenswrapper[4966]: I0127 17:07:04.990301 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5461689b-4309-4ffe-9b5a-fef2eba77915/openstackclient/0.log" Jan 27 17:07:05 crc kubenswrapper[4966]: I0127 17:07:05.187679 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-96kvf_7c442a88-8881-4780-a2c3-eddb5d940209/ovn-controller/0.log" Jan 27 17:07:05 crc kubenswrapper[4966]: I0127 17:07:05.326381 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-w9xlx_e9d6ae15-29bc-428e-a03b-c9a7d1d2f8ef/openstack-network-exporter/0.log" Jan 27 17:07:05 crc kubenswrapper[4966]: I0127 
17:07:05.360153 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_dffd2675-6c13-4c1e-82c5-d19c859db134/nova-metadata-metadata/0.log" Jan 27 17:07:05 crc kubenswrapper[4966]: I0127 17:07:05.456136 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5jgjj_ae54b281-3b7e-412e-8575-9096f191343e/ovsdb-server-init/0.log" Jan 27 17:07:05 crc kubenswrapper[4966]: I0127 17:07:05.729656 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5jgjj_ae54b281-3b7e-412e-8575-9096f191343e/ovsdb-server/0.log" Jan 27 17:07:05 crc kubenswrapper[4966]: I0127 17:07:05.733461 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5jgjj_ae54b281-3b7e-412e-8575-9096f191343e/ovs-vswitchd/0.log" Jan 27 17:07:05 crc kubenswrapper[4966]: I0127 17:07:05.793591 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5jgjj_ae54b281-3b7e-412e-8575-9096f191343e/ovsdb-server-init/0.log" Jan 27 17:07:05 crc kubenswrapper[4966]: I0127 17:07:05.987803 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b38986f-892c-45df-9229-2d4dae664b48/openstack-network-exporter/0.log" Jan 27 17:07:06 crc kubenswrapper[4966]: I0127 17:07:06.047055 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b38986f-892c-45df-9229-2d4dae664b48/ovn-northd/0.log" Jan 27 17:07:06 crc kubenswrapper[4966]: I0127 17:07:06.072773 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-x65gv_e415840f-113a-4545-a340-f206e683b62c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:06 crc kubenswrapper[4966]: I0127 17:07:06.339001 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_17569cb4-bb32-47c9-8fed-2bffeda09a7c/openstack-network-exporter/0.log" Jan 27 17:07:06 crc kubenswrapper[4966]: I0127 17:07:06.342687 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_17569cb4-bb32-47c9-8fed-2bffeda09a7c/ovsdbserver-nb/0.log" Jan 27 17:07:06 crc kubenswrapper[4966]: I0127 17:07:06.514003 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_bfe12ec4-548f-4242-94fb-1ac7cac46c73/openstack-network-exporter/0.log" Jan 27 17:07:06 crc kubenswrapper[4966]: I0127 17:07:06.611110 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_bfe12ec4-548f-4242-94fb-1ac7cac46c73/ovsdbserver-sb/0.log" Jan 27 17:07:06 crc kubenswrapper[4966]: I0127 17:07:06.752574 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f68b888d8-25wpv_5a03245f-c0e4-4241-8711-18cd9517be4d/placement-api/0.log" Jan 27 17:07:06 crc kubenswrapper[4966]: I0127 17:07:06.851331 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_20223cbd-a9e0-4eb8-b051-0833bebe5975/init-config-reloader/0.log" Jan 27 17:07:06 crc kubenswrapper[4966]: I0127 17:07:06.893610 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f68b888d8-25wpv_5a03245f-c0e4-4241-8711-18cd9517be4d/placement-log/0.log" Jan 27 17:07:07 crc kubenswrapper[4966]: I0127 17:07:07.109491 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_20223cbd-a9e0-4eb8-b051-0833bebe5975/init-config-reloader/0.log" Jan 27 17:07:07 crc 
kubenswrapper[4966]: I0127 17:07:07.139211 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_20223cbd-a9e0-4eb8-b051-0833bebe5975/config-reloader/0.log" Jan 27 17:07:07 crc kubenswrapper[4966]: I0127 17:07:07.191916 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_20223cbd-a9e0-4eb8-b051-0833bebe5975/prometheus/0.log" Jan 27 17:07:07 crc kubenswrapper[4966]: I0127 17:07:07.237616 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_20223cbd-a9e0-4eb8-b051-0833bebe5975/thanos-sidecar/0.log" Jan 27 17:07:07 crc kubenswrapper[4966]: I0127 17:07:07.438436 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_392d1dfb-fb0e-4c96-bd6b-0d85c032f41b/setup-container/0.log" Jan 27 17:07:07 crc kubenswrapper[4966]: I0127 17:07:07.639484 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_392d1dfb-fb0e-4c96-bd6b-0d85c032f41b/setup-container/0.log" Jan 27 17:07:07 crc kubenswrapper[4966]: I0127 17:07:07.660177 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_392d1dfb-fb0e-4c96-bd6b-0d85c032f41b/rabbitmq/0.log" Jan 27 17:07:07 crc kubenswrapper[4966]: I0127 17:07:07.705926 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_10d78543-5cf7-4e24-aa48-52feb8606492/setup-container/0.log" Jan 27 17:07:07 crc kubenswrapper[4966]: I0127 17:07:07.977522 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_10d78543-5cf7-4e24-aa48-52feb8606492/rabbitmq/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 17:07:08.031242 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_10d78543-5cf7-4e24-aa48-52feb8606492/setup-container/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 17:07:08.088423 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d/setup-container/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 17:07:08.332138 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d/setup-container/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 17:07:08.335153 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_793ef49f-7394-4261-a7c1-b262c6744776/setup-container/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 17:07:08.415828 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_cdfb19c1-5a2d-4a90-ba29-2cfcccd2174d/rabbitmq/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 17:07:08.646450 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_793ef49f-7394-4261-a7c1-b262c6744776/setup-container/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 17:07:08.650555 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_793ef49f-7394-4261-a7c1-b262c6744776/rabbitmq/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 17:07:08.743657 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vks7h_f476647f-6bc8-43f4-804e-1349f84b2639/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 
17:07:08.875339 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8q6cm_a7f9447f-e2b0-4ff0-bdf9-bfc90483e383/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:08 crc kubenswrapper[4966]: I0127 17:07:08.967104 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6j8gv_a39c95f2-d908-4593-9ddf-813da13c1f6a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:09 crc kubenswrapper[4966]: I0127 17:07:09.155202 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-775hq_ebb6cac6-208d-407d-b0a5-5556e1968d8d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:09 crc kubenswrapper[4966]: I0127 17:07:09.335096 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-c5hdk_eb03c226-9aea-45da-aa90-3243fce92eee/ssh-known-hosts-edpm-deployment/0.log" Jan 27 17:07:09 crc kubenswrapper[4966]: I0127 17:07:09.487178 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f67cf7b6c-fm8vs_bb0a5c7d-bb55-4f56-9f03-268df91b2748/proxy-server/0.log" Jan 27 17:07:09 crc kubenswrapper[4966]: I0127 17:07:09.587378 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f67cf7b6c-fm8vs_bb0a5c7d-bb55-4f56-9f03-268df91b2748/proxy-httpd/0.log" Jan 27 17:07:09 crc kubenswrapper[4966]: I0127 17:07:09.668615 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-6r97n_5036c06b-cb10-4530-9315-4ba4dee273f0/swift-ring-rebalance/0.log" Jan 27 17:07:09 crc kubenswrapper[4966]: I0127 17:07:09.818363 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/account-auditor/0.log" Jan 27 17:07:09 crc kubenswrapper[4966]: I0127 17:07:09.850303 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/account-reaper/0.log" Jan 27 17:07:09 crc kubenswrapper[4966]: I0127 17:07:09.973539 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/account-replicator/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.053759 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/account-server/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.062566 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/container-auditor/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.134043 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/container-replicator/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.231259 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/container-server/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.289180 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/object-auditor/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.327711 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/container-updater/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.385746 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/object-expirer/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.546192 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/object-replicator/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.550490 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/object-server/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.611810 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/rsync/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.633427 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/object-updater/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.771506 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a59c903b-6e40-43bd-a120-e47e504cf5a9/swift-recon-cron/0.log" Jan 27 17:07:10 crc kubenswrapper[4966]: I0127 17:07:10.900864 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qc6nb_492de347-8c7a-4efc-a0ad-000c4da9df94/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:11 crc kubenswrapper[4966]: I0127 17:07:11.030446 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-6nkvc_78e9dd7a-9cb3-47ec-8412-30ce3be2b93b/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:11 crc kubenswrapper[4966]: I0127 17:07:11.268536 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_2a62958e-1b8e-4063-8070-4a273e047872/test-operator-logs-container/0.log" Jan 27 17:07:11 crc kubenswrapper[4966]: I0127 17:07:11.480833 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-xstc7_5a43d2ae-3ec1-49be-a4f4-70edbbf06d0b/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 17:07:11 crc kubenswrapper[4966]: I0127 17:07:11.521207 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:07:11 crc kubenswrapper[4966]: E0127 17:07:11.521547 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:07:11 crc kubenswrapper[4966]: I0127 17:07:11.600482 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9cb67ef7-bf33-4d76-855a-b2e1a16ec0a6/tempest-tests-tempest-tests-runner/0.log" Jan 27 17:07:21 crc kubenswrapper[4966]: I0127 17:07:21.987475 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_3778d5b6-0474-4399-b163-521cb18b3eda/memcached/0.log" Jan 27 17:07:22 crc kubenswrapper[4966]: I0127 17:07:22.522596 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:07:22 crc kubenswrapper[4966]: E0127 17:07:22.523166 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:07:36 crc kubenswrapper[4966]: I0127 17:07:36.520790 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:07:36 crc kubenswrapper[4966]: E0127 17:07:36.521435 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:07:42 crc kubenswrapper[4966]: I0127 17:07:42.954399 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp_99316d5d-7260-4487-98dc-a531f3501aa0/util/0.log" Jan 27 17:07:43 crc kubenswrapper[4966]: I0127 17:07:43.109934 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp_99316d5d-7260-4487-98dc-a531f3501aa0/pull/0.log" Jan 27 17:07:43 crc kubenswrapper[4966]: I0127 17:07:43.136506 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp_99316d5d-7260-4487-98dc-a531f3501aa0/util/0.log" Jan 27 17:07:43 crc kubenswrapper[4966]: I0127 17:07:43.157640 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp_99316d5d-7260-4487-98dc-a531f3501aa0/pull/0.log" Jan 27 17:07:43 crc kubenswrapper[4966]: I0127 17:07:43.360359 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp_99316d5d-7260-4487-98dc-a531f3501aa0/pull/0.log" Jan 27 17:07:43 crc kubenswrapper[4966]: I0127 17:07:43.367592 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp_99316d5d-7260-4487-98dc-a531f3501aa0/util/0.log" Jan 27 17:07:43 crc kubenswrapper[4966]: I0127 17:07:43.403617 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1cfafb0b2d78d8f18ab14b0a2b52bc6a688bc9c0c405c0093332ba9b1684glp_99316d5d-7260-4487-98dc-a531f3501aa0/extract/0.log" Jan 27 17:07:43 crc kubenswrapper[4966]: I0127 17:07:43.586672 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-9s62h_eb03df91-4797-41be-a7fb-7ca572014c88/manager/0.log" Jan 27 17:07:43 crc 
kubenswrapper[4966]: I0127 17:07:43.668601 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-hqm7h_093d4126-d96d-475a-9519-020f2f73a742/manager/0.log" Jan 27 17:07:43 crc kubenswrapper[4966]: I0127 17:07:43.800732 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-76ffr_45594823-cdbb-4586-95d2-f2af9f6460b9/manager/0.log" Jan 27 17:07:44 crc kubenswrapper[4966]: I0127 17:07:44.458838 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-88lhp_9dcc8f2a-06d2-493e-b0ce-50120cef400e/manager/0.log" Jan 27 17:07:44 crc kubenswrapper[4966]: I0127 17:07:44.530005 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-j7j9c_624197a8-447a-4004-a1e0-679ce29dbe86/manager/0.log" Jan 27 17:07:44 crc kubenswrapper[4966]: I0127 17:07:44.650347 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-77jlm_ed16ab57-79cc-42b3-9ae5-663ba4c7b8ff/manager/0.log" Jan 27 17:07:44 crc kubenswrapper[4966]: I0127 17:07:44.899188 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-rzghf_e2cfe3d1-d500-418e-bc6b-4da3482999c3/manager/0.log" Jan 27 17:07:44 crc kubenswrapper[4966]: I0127 17:07:44.991772 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-qwc6v_cfa058e6-1d6f-4dc2-8058-c00b201175b5/manager/0.log" Jan 27 17:07:45 crc kubenswrapper[4966]: I0127 17:07:45.098231 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-j6s8k_dd40e2cd-59aa-442b-b27a-209632cba6e4/manager/0.log" Jan 27 17:07:45 crc kubenswrapper[4966]: I0127 17:07:45.130555 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-6d5tv_08ac68d1-220d-4098-9eed-6d0e3b752e5d/manager/0.log" Jan 27 17:07:45 crc kubenswrapper[4966]: I0127 17:07:45.322552 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-2lmbw_3ae401e5-feea-47d3-9c86-1e33635a461a/manager/0.log" Jan 27 17:07:45 crc kubenswrapper[4966]: I0127 17:07:45.374156 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-h6qtl_871381eb-c218-433c-a004-fea884f4ced0/manager/0.log" Jan 27 17:07:45 crc kubenswrapper[4966]: I0127 17:07:45.646518 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-2n8dp_6006cb9c-d22f-47b1-b8b6-cb999ecab7df/manager/0.log" Jan 27 17:07:45 crc kubenswrapper[4966]: I0127 17:07:45.655274 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-sttbs_64b84834-e9db-4f50-a7c7-6d24302652d3/manager/0.log" Jan 27 17:07:46 crc kubenswrapper[4966]: I0127 17:07:46.222206 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854hj5j9_20e54080-e732-4925-b0c2-35669744821d/manager/0.log" Jan 27 17:07:46 crc kubenswrapper[4966]: I0127 
17:07:46.866122 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-75d854449c-9v6lm_cfd02a37-95ae-43f0-9e50-2e9d78202bd9/operator/0.log" Jan 27 17:07:47 crc kubenswrapper[4966]: I0127 17:07:47.127834 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-48xxm_57175838-b13a-4dc9-bf85-ed8668e3d88c/registry-server/1.log" Jan 27 17:07:47 crc kubenswrapper[4966]: I0127 17:07:47.196018 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-48xxm_57175838-b13a-4dc9-bf85-ed8668e3d88c/registry-server/0.log" Jan 27 17:07:47 crc kubenswrapper[4966]: I0127 17:07:47.399340 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-rv5wp_f096bdf7-f589-4344-b71f-ab9db2eded5f/manager/0.log" Jan 27 17:07:47 crc kubenswrapper[4966]: I0127 17:07:47.445495 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-kxj69_dd6f6600-3072-42e6-a8ca-5e72c960425a/manager/0.log" Jan 27 17:07:47 crc kubenswrapper[4966]: I0127 17:07:47.710812 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-rvh45_0af070d2-e4fd-488e-abd5-c8ae5915d089/operator/0.log" Jan 27 17:07:47 crc kubenswrapper[4966]: I0127 17:07:47.727091 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-zdjjc_8645d6d2-f7cd-4578-9a1a-8b07beeae08c/manager/0.log" Jan 27 17:07:48 crc kubenswrapper[4966]: I0127 17:07:48.042537 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-crddt_3bc21fb1-fc35-42e4-ab0f-bdd047cd05d7/manager/0.log" Jan 27 17:07:48 crc kubenswrapper[4966]: I0127 17:07:48.173835 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-l5xd2_434d2d44-cb00-40d2-90b5-64dd65faadc8/manager/0.log" Jan 27 17:07:48 crc kubenswrapper[4966]: I0127 17:07:48.484626 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-8bb444544-qmbfx_734cfb67-80ec-42a1-8d52-298ae82e1a6b/manager/0.log" Jan 27 17:07:48 crc kubenswrapper[4966]: I0127 17:07:48.584225 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69869d7dcf-h42mh_e49e9fb2-a5f0-4106-b239-93d488e4f515/manager/0.log" Jan 27 17:07:51 crc kubenswrapper[4966]: I0127 17:07:51.521622 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:07:51 crc kubenswrapper[4966]: E0127 17:07:51.522400 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:07:53 crc kubenswrapper[4966]: I0127 17:07:53.867966 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jvmh5"] Jan 27 17:07:53 crc 
kubenswrapper[4966]: E0127 17:07:53.869001 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd442471-5693-4884-9479-659002b91119" containerName="container-00" Jan 27 17:07:53 crc kubenswrapper[4966]: I0127 17:07:53.869014 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd442471-5693-4884-9479-659002b91119" containerName="container-00" Jan 27 17:07:53 crc kubenswrapper[4966]: I0127 17:07:53.869230 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd442471-5693-4884-9479-659002b91119" containerName="container-00" Jan 27 17:07:53 crc kubenswrapper[4966]: I0127 17:07:53.875426 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:53 crc kubenswrapper[4966]: I0127 17:07:53.891981 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvmh5"] Jan 27 17:07:53 crc kubenswrapper[4966]: I0127 17:07:53.961913 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkbkx\" (UniqueName: \"kubernetes.io/projected/5167bfdf-b60b-4f32-adbd-f133454ccf07-kube-api-access-fkbkx\") pod \"redhat-operators-jvmh5\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:53 crc kubenswrapper[4966]: I0127 17:07:53.962138 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-catalog-content\") pod \"redhat-operators-jvmh5\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:53 crc kubenswrapper[4966]: I0127 17:07:53.962211 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-utilities\") pod \"redhat-operators-jvmh5\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:54 crc kubenswrapper[4966]: I0127 17:07:54.064190 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-catalog-content\") pod \"redhat-operators-jvmh5\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:54 crc kubenswrapper[4966]: I0127 17:07:54.064307 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-utilities\") pod \"redhat-operators-jvmh5\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:54 crc kubenswrapper[4966]: I0127 17:07:54.064364 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkbkx\" (UniqueName: \"kubernetes.io/projected/5167bfdf-b60b-4f32-adbd-f133454ccf07-kube-api-access-fkbkx\") pod \"redhat-operators-jvmh5\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:54 crc kubenswrapper[4966]: I0127 17:07:54.066033 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-catalog-content\") pod \"redhat-operators-jvmh5\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:54 crc kubenswrapper[4966]: I0127 17:07:54.066400 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-utilities\") pod \"redhat-operators-jvmh5\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:54 crc kubenswrapper[4966]: I0127 17:07:54.086227 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkbkx\" (UniqueName: \"kubernetes.io/projected/5167bfdf-b60b-4f32-adbd-f133454ccf07-kube-api-access-fkbkx\") pod \"redhat-operators-jvmh5\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:54 crc kubenswrapper[4966]: I0127 17:07:54.195104 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:07:55 crc kubenswrapper[4966]: W0127 17:07:55.627652 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5167bfdf_b60b_4f32_adbd_f133454ccf07.slice/crio-2de579f2fd4510266a8fea482fc2e5e8543e7f01665561c237b6987bf16f75dd WatchSource:0}: Error finding container 2de579f2fd4510266a8fea482fc2e5e8543e7f01665561c237b6987bf16f75dd: Status 404 returned error can't find the container with id 2de579f2fd4510266a8fea482fc2e5e8543e7f01665561c237b6987bf16f75dd Jan 27 17:07:55 crc kubenswrapper[4966]: I0127 17:07:55.655301 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvmh5"] Jan 27 17:07:55 crc kubenswrapper[4966]: I0127 17:07:55.766117 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvmh5" event={"ID":"5167bfdf-b60b-4f32-adbd-f133454ccf07","Type":"ContainerStarted","Data":"2de579f2fd4510266a8fea482fc2e5e8543e7f01665561c237b6987bf16f75dd"} Jan 27 17:07:56 crc kubenswrapper[4966]: I0127 17:07:56.779736 4966 generic.go:334] "Generic (PLEG): container finished" podID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerID="232c065cbf0672985d2c3380536966da23da0c140c9314d4d23eb3548e9e26d8" exitCode=0 Jan 27 17:07:56 crc kubenswrapper[4966]: I0127 17:07:56.779849 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvmh5" event={"ID":"5167bfdf-b60b-4f32-adbd-f133454ccf07","Type":"ContainerDied","Data":"232c065cbf0672985d2c3380536966da23da0c140c9314d4d23eb3548e9e26d8"} Jan 27 17:07:58 crc kubenswrapper[4966]: I0127 17:07:58.809159 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvmh5" event={"ID":"5167bfdf-b60b-4f32-adbd-f133454ccf07","Type":"ContainerStarted","Data":"d289b06978927acacb72d92e670415f6f36daecc97048c1c5f1eb83c0aaf2d07"} Jan 27 17:08:03 crc kubenswrapper[4966]: I0127 17:08:03.862281 4966 generic.go:334] "Generic (PLEG): container finished" podID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerID="d289b06978927acacb72d92e670415f6f36daecc97048c1c5f1eb83c0aaf2d07" exitCode=0 Jan 27 17:08:03 crc kubenswrapper[4966]: I0127 17:08:03.862361 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvmh5" 
event={"ID":"5167bfdf-b60b-4f32-adbd-f133454ccf07","Type":"ContainerDied","Data":"d289b06978927acacb72d92e670415f6f36daecc97048c1c5f1eb83c0aaf2d07"} Jan 27 17:08:03 crc kubenswrapper[4966]: I0127 17:08:03.867126 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 17:08:04 crc kubenswrapper[4966]: I0127 17:08:04.876840 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvmh5" event={"ID":"5167bfdf-b60b-4f32-adbd-f133454ccf07","Type":"ContainerStarted","Data":"aa3cc1d946686866bb370c1dfff1fa945a5796194e3ae8570b768b9b52fa135b"} Jan 27 17:08:04 crc kubenswrapper[4966]: I0127 17:08:04.901934 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jvmh5" podStartSLOduration=4.097570762 podStartE2EDuration="11.901915205s" podCreationTimestamp="2026-01-27 17:07:53 +0000 UTC" firstStartedPulling="2026-01-27 17:07:56.782762347 +0000 UTC m=+5143.085555845" lastFinishedPulling="2026-01-27 17:08:04.5871068 +0000 UTC m=+5150.889900288" observedRunningTime="2026-01-27 17:08:04.894022958 +0000 UTC m=+5151.196816496" watchObservedRunningTime="2026-01-27 17:08:04.901915205 +0000 UTC m=+5151.204708693" Jan 27 17:08:06 crc kubenswrapper[4966]: I0127 17:08:06.522159 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:08:06 crc kubenswrapper[4966]: E0127 17:08:06.522796 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:08:12 crc kubenswrapper[4966]: I0127 17:08:12.102580 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-l4jwq_d00c259a-2ad3-44dd-97d2-53e763da5ab1/control-plane-machine-set-operator/0.log" Jan 27 17:08:12 crc kubenswrapper[4966]: I0127 17:08:12.318399 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-f2z26_eb7f0aaf-703f-4d9c-89c8-701f0707ab18/kube-rbac-proxy/0.log" Jan 27 17:08:12 crc kubenswrapper[4966]: I0127 17:08:12.407367 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-f2z26_eb7f0aaf-703f-4d9c-89c8-701f0707ab18/machine-api-operator/0.log" Jan 27 17:08:14 crc kubenswrapper[4966]: I0127 17:08:14.196101 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:08:14 crc kubenswrapper[4966]: I0127 17:08:14.196433 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:08:15 crc kubenswrapper[4966]: I0127 17:08:15.257952 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jvmh5" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="registry-server" probeResult="failure" output=< Jan 27 17:08:15 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:08:15 crc kubenswrapper[4966]: > Jan 27 17:08:21 crc kubenswrapper[4966]: I0127 
17:08:21.521431 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:08:21 crc kubenswrapper[4966]: E0127 17:08:21.522382 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:08:25 crc kubenswrapper[4966]: I0127 17:08:25.309246 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jvmh5" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="registry-server" probeResult="failure" output=< Jan 27 17:08:25 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:08:25 crc kubenswrapper[4966]: > Jan 27 17:08:27 crc kubenswrapper[4966]: I0127 17:08:27.553575 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-jmsgj_d545853c-f504-4f02-a056-06ae19f8d3a4/cert-manager-controller/0.log" Jan 27 17:08:27 crc kubenswrapper[4966]: I0127 17:08:27.842061 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rk7xr_4e21f2be-885f-4486-a5c2-056b78ab3ae1/cert-manager-cainjector/0.log" Jan 27 17:08:27 crc kubenswrapper[4966]: I0127 17:08:27.894780 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5jnwt_37cde3a9-999c-4c96-a024-1769c058c4c8/cert-manager-webhook/1.log" Jan 27 17:08:27 crc kubenswrapper[4966]: I0127 17:08:27.990203 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5jnwt_37cde3a9-999c-4c96-a024-1769c058c4c8/cert-manager-webhook/0.log" Jan 27 17:08:34 crc kubenswrapper[4966]: I0127 17:08:34.528937 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:08:34 crc kubenswrapper[4966]: E0127 17:08:34.529748 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:08:35 crc kubenswrapper[4966]: I0127 17:08:35.253346 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jvmh5" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="registry-server" probeResult="failure" output=< Jan 27 17:08:35 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s Jan 27 17:08:35 crc kubenswrapper[4966]: > Jan 27 17:08:42 crc kubenswrapper[4966]: I0127 17:08:42.585479 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mtks6_a7036db1-80f3-4de2-ac2c-8b6ad2c3a69e/nmstate-console-plugin/0.log" Jan 27 17:08:42 crc kubenswrapper[4966]: I0127 17:08:42.824515 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-86gp2_25afd019-0360-4ea5-ac94-94c6f42bb8a8/nmstate-handler/0.log" Jan 27 17:08:42 crc kubenswrapper[4966]: I0127 17:08:42.896027 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8lp6r_24848ca3-bec9-4747-9d22-58606da5ef34/kube-rbac-proxy/0.log" Jan 27 17:08:43 crc kubenswrapper[4966]: I0127 17:08:43.007938 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8lp6r_24848ca3-bec9-4747-9d22-58606da5ef34/nmstate-metrics/0.log" Jan 27 17:08:43 crc kubenswrapper[4966]: I0127 17:08:43.040532 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-l2twl_7761ce15-c3e7-45f2-a85e-000ea7043118/nmstate-operator/0.log" Jan 27 17:08:43 crc kubenswrapper[4966]: I0127 17:08:43.185869 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-5bg28_ee84a560-7150-49bd-94ac-e190aab8bc92/nmstate-webhook/0.log" Jan 27 17:08:44 crc kubenswrapper[4966]: I0127 17:08:44.247252 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:08:44 crc kubenswrapper[4966]: I0127 17:08:44.307487 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:08:44 crc kubenswrapper[4966]: I0127 17:08:44.502804 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvmh5"] Jan 27 17:08:45 crc kubenswrapper[4966]: I0127 17:08:45.324398 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jvmh5" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="registry-server" containerID="cri-o://aa3cc1d946686866bb370c1dfff1fa945a5796194e3ae8570b768b9b52fa135b" gracePeriod=2 Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.339650 4966 generic.go:334] "Generic (PLEG): container finished" podID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerID="aa3cc1d946686866bb370c1dfff1fa945a5796194e3ae8570b768b9b52fa135b" exitCode=0 Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.339730 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvmh5" event={"ID":"5167bfdf-b60b-4f32-adbd-f133454ccf07","Type":"ContainerDied","Data":"aa3cc1d946686866bb370c1dfff1fa945a5796194e3ae8570b768b9b52fa135b"} Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.340052 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvmh5" event={"ID":"5167bfdf-b60b-4f32-adbd-f133454ccf07","Type":"ContainerDied","Data":"2de579f2fd4510266a8fea482fc2e5e8543e7f01665561c237b6987bf16f75dd"} Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.340072 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de579f2fd4510266a8fea482fc2e5e8543e7f01665561c237b6987bf16f75dd" Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.439498 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.539882 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-utilities\") pod \"5167bfdf-b60b-4f32-adbd-f133454ccf07\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.540068 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-catalog-content\") pod \"5167bfdf-b60b-4f32-adbd-f133454ccf07\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.540212 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkbkx\" (UniqueName: \"kubernetes.io/projected/5167bfdf-b60b-4f32-adbd-f133454ccf07-kube-api-access-fkbkx\") pod \"5167bfdf-b60b-4f32-adbd-f133454ccf07\" (UID: \"5167bfdf-b60b-4f32-adbd-f133454ccf07\") " Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.540606 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-utilities" (OuterVolumeSpecName: "utilities") pod "5167bfdf-b60b-4f32-adbd-f133454ccf07" (UID: "5167bfdf-b60b-4f32-adbd-f133454ccf07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.540831 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.552706 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5167bfdf-b60b-4f32-adbd-f133454ccf07-kube-api-access-fkbkx" (OuterVolumeSpecName: "kube-api-access-fkbkx") pod "5167bfdf-b60b-4f32-adbd-f133454ccf07" (UID: "5167bfdf-b60b-4f32-adbd-f133454ccf07"). InnerVolumeSpecName "kube-api-access-fkbkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.643576 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5167bfdf-b60b-4f32-adbd-f133454ccf07" (UID: "5167bfdf-b60b-4f32-adbd-f133454ccf07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.644074 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkbkx\" (UniqueName: \"kubernetes.io/projected/5167bfdf-b60b-4f32-adbd-f133454ccf07-kube-api-access-fkbkx\") on node \"crc\" DevicePath \"\"" Jan 27 17:08:46 crc kubenswrapper[4966]: I0127 17:08:46.644100 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5167bfdf-b60b-4f32-adbd-f133454ccf07-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:08:47 crc kubenswrapper[4966]: I0127 17:08:47.359743 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jvmh5" Jan 27 17:08:47 crc kubenswrapper[4966]: I0127 17:08:47.424583 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvmh5"] Jan 27 17:08:47 crc kubenswrapper[4966]: I0127 17:08:47.433631 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jvmh5"] Jan 27 17:08:48 crc kubenswrapper[4966]: I0127 17:08:48.521909 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:08:48 crc kubenswrapper[4966]: E0127 17:08:48.522510 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:08:48 crc kubenswrapper[4966]: I0127 17:08:48.533826 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" path="/var/lib/kubelet/pods/5167bfdf-b60b-4f32-adbd-f133454ccf07/volumes" Jan 27 17:08:56 crc kubenswrapper[4966]: I0127 17:08:56.407856 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c74c5b958-9l7lg_8ee78ad6-4785-4aee-a8cb-c16b147764d9/kube-rbac-proxy/0.log" Jan 27 17:08:56 crc kubenswrapper[4966]: I0127 17:08:56.457220 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c74c5b958-9l7lg_8ee78ad6-4785-4aee-a8cb-c16b147764d9/manager/0.log" Jan 27 17:09:03 crc kubenswrapper[4966]: I0127 17:09:03.521078 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:09:03 crc kubenswrapper[4966]: E0127 17:09:03.522973 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:09:11 crc kubenswrapper[4966]: I0127 17:09:11.499287 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-vwbv2_15e87d6f-3d12-45d6-9d4c-e23919de2787/prometheus-operator/0.log" Jan 27 17:09:11 crc kubenswrapper[4966]: I0127 17:09:11.860817 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_8f59b8d2-65c5-447c-b71c-6aa014c7e531/prometheus-operator-admission-webhook/0.log" Jan 27 17:09:11 crc kubenswrapper[4966]: I0127 17:09:11.907868 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_81566a49-33d9-4ca2-baa8-1944c4769bf5/prometheus-operator-admission-webhook/0.log" Jan 27 17:09:12 crc kubenswrapper[4966]: I0127 17:09:12.052990 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9876t_0443c8da-0b0f-4632-b990-f83e403a8b82/operator/1.log" Jan 27 17:09:12 crc kubenswrapper[4966]: I0127 17:09:12.084762 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9876t_0443c8da-0b0f-4632-b990-f83e403a8b82/operator/0.log" Jan 27 17:09:12 crc kubenswrapper[4966]: I0127 17:09:12.157416 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-qwptr_1e34c83b-3e54-47c0-88c7-57c3065deda1/observability-ui-dashboards/0.log" Jan 27 17:09:12 crc kubenswrapper[4966]: I0127 17:09:12.276104 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-x6l4k_b9f6b9a4-ded2-467b-9e87-6fafa667f709/perses-operator/0.log" Jan 27 17:09:17 crc kubenswrapper[4966]: I0127 17:09:17.521547 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:09:17 crc kubenswrapper[4966]: E0127 17:09:17.522152 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:09:29 crc kubenswrapper[4966]: I0127 17:09:29.002535 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-pk8lz_fe36d099-9929-462a-8275-c58158cafe2c/cluster-logging-operator/0.log" Jan 27 17:09:29 crc kubenswrapper[4966]: I0127 17:09:29.407863 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-8vsc7_3b8269e8-5b68-4f5b-8bcf-9b4852846b6a/collector/0.log" Jan 27 17:09:29 crc kubenswrapper[4966]: I0127 17:09:29.446367 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_0caab707-59fa-4d4d-976b-e1f99d30fc01/loki-compactor/0.log" Jan 27 17:09:29 crc kubenswrapper[4966]: I0127 17:09:29.588809 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-88nmc_c70eec8b-c8da-4620-9c5e-bb19e5d66424/loki-distributor/0.log" Jan 27 17:09:30 crc kubenswrapper[4966]: I0127 17:09:30.235563 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575b568fc4-5wzxv_fa2c58c2-9b23-4360-897e-582237775277/gateway/0.log" Jan 27 17:09:30 crc kubenswrapper[4966]: I0127 17:09:30.263485 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575b568fc4-5wzxv_fa2c58c2-9b23-4360-897e-582237775277/opa/0.log" Jan 27 17:09:30 crc kubenswrapper[4966]: I0127 17:09:30.398527 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575b568fc4-lr7wd_a58b269c-6e15-4eda-aa6f-00e51aa132fe/gateway/0.log" Jan 27 17:09:30 crc kubenswrapper[4966]: I0127 17:09:30.460835 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575b568fc4-lr7wd_a58b269c-6e15-4eda-aa6f-00e51aa132fe/opa/0.log" Jan 27 17:09:30 crc kubenswrapper[4966]: I0127 17:09:30.521829 4966 scope.go:117] "RemoveContainer" 
containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:09:30 crc kubenswrapper[4966]: E0127 17:09:30.522284 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:09:30 crc kubenswrapper[4966]: I0127 17:09:30.523745 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_988dcb32-33f1-4e22-8b8c-a1a3b09828b3/loki-index-gateway/0.log" Jan 27 17:09:30 crc kubenswrapper[4966]: I0127 17:09:30.701256 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_e440811a-ec7d-4606-a78b-6b3d5062e044/loki-ingester/0.log" Jan 27 17:09:30 crc kubenswrapper[4966]: I0127 17:09:30.744169 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-95xm4_80c26c09-83a0-4b08-979b-a138a5ed5d4b/loki-querier/0.log" Jan 27 17:09:30 crc kubenswrapper[4966]: I0127 17:09:30.867161 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-6s27c_9c5e1e82-3053-4895-91ce-56475540fc35/loki-query-frontend/0.log" Jan 27 17:09:44 crc kubenswrapper[4966]: I0127 17:09:44.532156 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:09:44 crc kubenswrapper[4966]: E0127 17:09:44.533148 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.165773 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nn8hx_9fb28925-f952-48ea-88e5-db1ec4dba047/kube-rbac-proxy/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.288321 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nn8hx_9fb28925-f952-48ea-88e5-db1ec4dba047/controller/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.401874 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-frr-files/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.543200 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-reloader/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.557516 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-reloader/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.568221 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-frr-files/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 
17:09:45.568268 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-metrics/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.753162 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-frr-files/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.763340 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-reloader/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.764059 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-metrics/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.772509 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-metrics/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.938362 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-frr-files/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.968692 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-metrics/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.973948 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/cp-reloader/0.log" Jan 27 17:09:45 crc kubenswrapper[4966]: I0127 17:09:45.981290 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/controller/1.log" Jan 27 17:09:46 crc kubenswrapper[4966]: I0127 17:09:46.130665 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/controller/0.log" Jan 27 17:09:46 crc kubenswrapper[4966]: I0127 17:09:46.170457 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/frr-metrics/0.log" Jan 27 17:09:46 crc kubenswrapper[4966]: I0127 17:09:46.247424 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/frr/1.log" Jan 27 17:09:46 crc kubenswrapper[4966]: I0127 17:09:46.340745 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/kube-rbac-proxy/0.log" Jan 27 17:09:46 crc kubenswrapper[4966]: I0127 17:09:46.379104 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/kube-rbac-proxy-frr/0.log" Jan 27 17:09:46 crc kubenswrapper[4966]: I0127 17:09:46.475983 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/reloader/0.log" Jan 27 17:09:46 crc kubenswrapper[4966]: I0127 17:09:46.548760 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-hfhvb_3e510e0a-a47d-416e-aec1-c7de88b0a2af/frr-k8s-webhook-server/1.log" Jan 27 17:09:46 crc kubenswrapper[4966]: I0127 17:09:46.678720 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-hfhvb_3e510e0a-a47d-416e-aec1-c7de88b0a2af/frr-k8s-webhook-server/0.log" Jan 27 17:09:46 crc kubenswrapper[4966]: I0127 17:09:46.795144 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-784794b655-q7lf8_c9a9151b-f291-44db-a0fb-904cf48b7e37/manager/0.log" Jan 27 17:09:47 crc kubenswrapper[4966]: I0127 17:09:47.485541 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wpv4z_ce06a03b-db66-49be-ace7-f79a0b78dc62/kube-rbac-proxy/0.log" Jan 27 17:09:47 crc kubenswrapper[4966]: I0127 17:09:47.489719 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-546646bf6b-gmbc9_2d8c7621-e3fc-4854-b0a2-fde7bad8c5ef/webhook-server/0.log" Jan 27 17:09:48 crc kubenswrapper[4966]: I0127 17:09:48.008109 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wpv4z_ce06a03b-db66-49be-ace7-f79a0b78dc62/speaker/1.log" Jan 27 17:09:48 crc kubenswrapper[4966]: I0127 17:09:48.502000 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wpv4z_ce06a03b-db66-49be-ace7-f79a0b78dc62/speaker/0.log" Jan 27 17:09:48 crc kubenswrapper[4966]: I0127 17:09:48.580029 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fpvwf_e75b042c-789e-43fc-8736-b3f5093f21db/frr/0.log" Jan 27 17:09:55 crc kubenswrapper[4966]: I0127 17:09:55.521983 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:09:55 crc kubenswrapper[4966]: E0127 17:09:55.524481 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:10:03 crc kubenswrapper[4966]: I0127 17:10:03.654432 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc_6c3974f8-3174-42c5-b11c-ae9c190dd0c3/util/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.196982 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc_6c3974f8-3174-42c5-b11c-ae9c190dd0c3/pull/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.239723 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc_6c3974f8-3174-42c5-b11c-ae9c190dd0c3/pull/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.249921 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc_6c3974f8-3174-42c5-b11c-ae9c190dd0c3/util/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.433189 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc_6c3974f8-3174-42c5-b11c-ae9c190dd0c3/pull/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.467689 4966 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc_6c3974f8-3174-42c5-b11c-ae9c190dd0c3/util/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.471672 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a257nwc_6c3974f8-3174-42c5-b11c-ae9c190dd0c3/extract/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.595648 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd_071b256a-ffeb-405a-b9ac-d65c622633b9/util/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.771722 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd_071b256a-ffeb-405a-b9ac-d65c622633b9/util/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.801881 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd_071b256a-ffeb-405a-b9ac-d65c622633b9/pull/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.803477 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd_071b256a-ffeb-405a-b9ac-d65c622633b9/pull/0.log" Jan 27 17:10:04 crc kubenswrapper[4966]: I0127 17:10:04.967746 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd_071b256a-ffeb-405a-b9ac-d65c622633b9/pull/0.log" Jan 27 17:10:05 crc kubenswrapper[4966]: I0127 17:10:05.006098 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd_071b256a-ffeb-405a-b9ac-d65c622633b9/extract/0.log" Jan 27 17:10:05 crc kubenswrapper[4966]: I0127 17:10:05.032823 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbwdxd_071b256a-ffeb-405a-b9ac-d65c622633b9/util/0.log" Jan 27 17:10:05 crc kubenswrapper[4966]: I0127 17:10:05.339662 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d_7274aa23-2136-466c-a16c-172e17d804ce/util/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.176040 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d_7274aa23-2136-466c-a16c-172e17d804ce/util/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.197490 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d_7274aa23-2136-466c-a16c-172e17d804ce/pull/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.203812 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d_7274aa23-2136-466c-a16c-172e17d804ce/pull/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.367811 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d_7274aa23-2136-466c-a16c-172e17d804ce/util/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.396272 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d_7274aa23-2136-466c-a16c-172e17d804ce/extract/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.413060 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b4sb8d_7274aa23-2136-466c-a16c-172e17d804ce/pull/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.537037 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p_8314cd46-03d3-46e9-a7dc-d7a561fe50fb/util/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.725909 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p_8314cd46-03d3-46e9-a7dc-d7a561fe50fb/pull/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.743711 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p_8314cd46-03d3-46e9-a7dc-d7a561fe50fb/util/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.756891 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p_8314cd46-03d3-46e9-a7dc-d7a561fe50fb/pull/0.log" Jan 27 17:10:06 crc kubenswrapper[4966]: I0127 17:10:06.945915 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p_8314cd46-03d3-46e9-a7dc-d7a561fe50fb/extract/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.002046 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p_8314cd46-03d3-46e9-a7dc-d7a561fe50fb/util/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.005741 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jm66p_8314cd46-03d3-46e9-a7dc-d7a561fe50fb/pull/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.155657 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r_5d93eed5-fedf-4b4e-b036-6fa9454b22a5/util/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.301636 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r_5d93eed5-fedf-4b4e-b036-6fa9454b22a5/util/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.351604 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r_5d93eed5-fedf-4b4e-b036-6fa9454b22a5/pull/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.352423 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r_5d93eed5-fedf-4b4e-b036-6fa9454b22a5/pull/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.498417 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r_5d93eed5-fedf-4b4e-b036-6fa9454b22a5/extract/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.498831 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r_5d93eed5-fedf-4b4e-b036-6fa9454b22a5/util/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.540283 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7br4n_0bcb091b-8f56-46c3-8437-2505b27684da/extract-utilities/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.541645 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822d7r_5d93eed5-fedf-4b4e-b036-6fa9454b22a5/pull/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.737759 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7br4n_0bcb091b-8f56-46c3-8437-2505b27684da/extract-utilities/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.757939 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7br4n_0bcb091b-8f56-46c3-8437-2505b27684da/extract-content/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.759946 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7br4n_0bcb091b-8f56-46c3-8437-2505b27684da/extract-content/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.924401 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7br4n_0bcb091b-8f56-46c3-8437-2505b27684da/extract-utilities/0.log" Jan 27 17:10:07 crc kubenswrapper[4966]: I0127 17:10:07.933668 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7br4n_0bcb091b-8f56-46c3-8437-2505b27684da/extract-content/0.log" Jan 27 17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.030179 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85mqp_055a60e0-b7aa-4b35-9807-01fe7095113d/extract-utilities/0.log" Jan 27 17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.208695 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85mqp_055a60e0-b7aa-4b35-9807-01fe7095113d/extract-utilities/0.log" Jan 27 17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.217650 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85mqp_055a60e0-b7aa-4b35-9807-01fe7095113d/extract-content/0.log" Jan 27 17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.264437 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85mqp_055a60e0-b7aa-4b35-9807-01fe7095113d/extract-content/0.log" Jan 27 17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.546936 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85mqp_055a60e0-b7aa-4b35-9807-01fe7095113d/extract-utilities/0.log" Jan 27 
17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.561207 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85mqp_055a60e0-b7aa-4b35-9807-01fe7095113d/extract-content/0.log" Jan 27 17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.757413 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85mqp_055a60e0-b7aa-4b35-9807-01fe7095113d/registry-server/0.log" Jan 27 17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.761859 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7br4n_0bcb091b-8f56-46c3-8437-2505b27684da/registry-server/0.log" Jan 27 17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.775591 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7cfns_692eec10-7d08-44ba-aa26-0ac0eacfb1e7/marketplace-operator/0.log" Jan 27 17:10:08 crc kubenswrapper[4966]: I0127 17:10:08.839149 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dvjsg_8ebef5e4-f520-44af-9488-659932ab7ff8/extract-utilities/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.007497 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dvjsg_8ebef5e4-f520-44af-9488-659932ab7ff8/extract-content/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.010799 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dvjsg_8ebef5e4-f520-44af-9488-659932ab7ff8/extract-utilities/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.032733 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dvjsg_8ebef5e4-f520-44af-9488-659932ab7ff8/extract-content/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.224520 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dvjsg_8ebef5e4-f520-44af-9488-659932ab7ff8/extract-utilities/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.232390 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dvjsg_8ebef5e4-f520-44af-9488-659932ab7ff8/extract-content/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.275407 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q8wrz_818e66f7-b294-448b-9d55-99de7ebd3f34/extract-utilities/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.400607 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dvjsg_8ebef5e4-f520-44af-9488-659932ab7ff8/registry-server/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.521757 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:10:09 crc kubenswrapper[4966]: E0127 17:10:09.522064 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 
17:10:09.526403 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q8wrz_818e66f7-b294-448b-9d55-99de7ebd3f34/extract-content/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.534831 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q8wrz_818e66f7-b294-448b-9d55-99de7ebd3f34/extract-utilities/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.535532 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q8wrz_818e66f7-b294-448b-9d55-99de7ebd3f34/extract-content/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.911967 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q8wrz_818e66f7-b294-448b-9d55-99de7ebd3f34/extract-content/0.log" Jan 27 17:10:09 crc kubenswrapper[4966]: I0127 17:10:09.917202 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q8wrz_818e66f7-b294-448b-9d55-99de7ebd3f34/extract-utilities/0.log" Jan 27 17:10:10 crc kubenswrapper[4966]: I0127 17:10:10.649593 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q8wrz_818e66f7-b294-448b-9d55-99de7ebd3f34/registry-server/0.log" Jan 27 17:10:22 crc kubenswrapper[4966]: I0127 17:10:22.521860 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:10:22 crc kubenswrapper[4966]: E0127 17:10:22.523247 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:10:24 crc kubenswrapper[4966]: I0127 17:10:24.324389 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-vwbv2_15e87d6f-3d12-45d6-9d4c-e23919de2787/prometheus-operator/0.log" Jan 27 17:10:24 crc kubenswrapper[4966]: I0127 17:10:24.329192 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-847885c9f7-kmc9d_8f59b8d2-65c5-447c-b71c-6aa014c7e531/prometheus-operator-admission-webhook/0.log" Jan 27 17:10:24 crc kubenswrapper[4966]: I0127 17:10:24.387291 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-847885c9f7-s4b89_81566a49-33d9-4ca2-baa8-1944c4769bf5/prometheus-operator-admission-webhook/0.log" Jan 27 17:10:24 crc kubenswrapper[4966]: I0127 17:10:24.519051 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9876t_0443c8da-0b0f-4632-b990-f83e403a8b82/operator/1.log" Jan 27 17:10:24 crc kubenswrapper[4966]: I0127 17:10:24.530341 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9876t_0443c8da-0b0f-4632-b990-f83e403a8b82/operator/0.log" Jan 27 17:10:24 crc kubenswrapper[4966]: I0127 17:10:24.575558 4966 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-qwptr_1e34c83b-3e54-47c0-88c7-57c3065deda1/observability-ui-dashboards/0.log" Jan 27 17:10:24 crc kubenswrapper[4966]: I0127 17:10:24.669533 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-x6l4k_b9f6b9a4-ded2-467b-9e87-6fafa667f709/perses-operator/0.log" Jan 27 17:10:34 crc kubenswrapper[4966]: I0127 17:10:34.531300 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:10:34 crc kubenswrapper[4966]: E0127 17:10:34.532207 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:10:38 crc kubenswrapper[4966]: I0127 17:10:38.380515 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c74c5b958-9l7lg_8ee78ad6-4785-4aee-a8cb-c16b147764d9/kube-rbac-proxy/0.log" Jan 27 17:10:38 crc kubenswrapper[4966]: I0127 17:10:38.427075 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c74c5b958-9l7lg_8ee78ad6-4785-4aee-a8cb-c16b147764d9/manager/0.log" Jan 27 17:10:43 crc kubenswrapper[4966]: E0127 17:10:43.287591 4966 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.58:36434->38.129.56.58:34425: write tcp 38.129.56.58:36434->38.129.56.58:34425: write: broken pipe Jan 27 17:10:46 crc kubenswrapper[4966]: I0127 17:10:46.521361 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:10:47 crc kubenswrapper[4966]: I0127 17:10:47.715229 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"0b82f5c4006e37ce4acea807f2f48ce9819f9a916fa418c6d1ef6e9b0f839312"} Jan 27 17:11:49 crc kubenswrapper[4966]: I0127 17:11:49.798776 4966 scope.go:117] "RemoveContainer" containerID="5cb3404bb4fe8913ec332d885b95ba429a3d602ce985a33e2849d41263a99c73" Jan 27 17:12:35 crc kubenswrapper[4966]: I0127 17:12:35.133162 4966 generic.go:334] "Generic (PLEG): container finished" podID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" containerID="427af27be2b0091963f55d63f3f31dc6b52cc63fa57db1f5086aa07f82878b69" exitCode=0 Jan 27 17:12:35 crc kubenswrapper[4966]: I0127 17:12:35.133306 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lgskz/must-gather-24pds" event={"ID":"a912b0fc-de7a-49d3-ba0b-8ae0475bf650","Type":"ContainerDied","Data":"427af27be2b0091963f55d63f3f31dc6b52cc63fa57db1f5086aa07f82878b69"} Jan 27 17:12:35 crc kubenswrapper[4966]: I0127 17:12:35.135697 4966 scope.go:117] "RemoveContainer" containerID="427af27be2b0091963f55d63f3f31dc6b52cc63fa57db1f5086aa07f82878b69" Jan 27 17:12:36 crc kubenswrapper[4966]: I0127 17:12:36.076362 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lgskz_must-gather-24pds_a912b0fc-de7a-49d3-ba0b-8ae0475bf650/gather/0.log" Jan 27 17:12:44 crc 
kubenswrapper[4966]: I0127 17:12:44.744551 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lgskz/must-gather-24pds"] Jan 27 17:12:44 crc kubenswrapper[4966]: I0127 17:12:44.747983 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lgskz/must-gather-24pds" podUID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" containerName="copy" containerID="cri-o://982a1a65f7d320712a671ab6c0c67e2642c2b38663d2742d518e8d03d7403ca4" gracePeriod=2 Jan 27 17:12:44 crc kubenswrapper[4966]: I0127 17:12:44.758729 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lgskz/must-gather-24pds"] Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.260528 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lgskz_must-gather-24pds_a912b0fc-de7a-49d3-ba0b-8ae0475bf650/copy/0.log" Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.261361 4966 generic.go:334] "Generic (PLEG): container finished" podID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" containerID="982a1a65f7d320712a671ab6c0c67e2642c2b38663d2742d518e8d03d7403ca4" exitCode=143 Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.660727 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lgskz_must-gather-24pds_a912b0fc-de7a-49d3-ba0b-8ae0475bf650/copy/0.log" Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.661366 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.721374 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-must-gather-output\") pod \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\" (UID: \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\") " Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.721475 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjcbr\" (UniqueName: \"kubernetes.io/projected/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-kube-api-access-kjcbr\") pod \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\" (UID: \"a912b0fc-de7a-49d3-ba0b-8ae0475bf650\") " Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.732368 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-kube-api-access-kjcbr" (OuterVolumeSpecName: "kube-api-access-kjcbr") pod "a912b0fc-de7a-49d3-ba0b-8ae0475bf650" (UID: "a912b0fc-de7a-49d3-ba0b-8ae0475bf650"). InnerVolumeSpecName "kube-api-access-kjcbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.826135 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjcbr\" (UniqueName: \"kubernetes.io/projected/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-kube-api-access-kjcbr\") on node \"crc\" DevicePath \"\"" Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.913341 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a912b0fc-de7a-49d3-ba0b-8ae0475bf650" (UID: "a912b0fc-de7a-49d3-ba0b-8ae0475bf650"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:12:45 crc kubenswrapper[4966]: I0127 17:12:45.928957 4966 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a912b0fc-de7a-49d3-ba0b-8ae0475bf650-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 17:12:46 crc kubenswrapper[4966]: I0127 17:12:46.273293 4966 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lgskz_must-gather-24pds_a912b0fc-de7a-49d3-ba0b-8ae0475bf650/copy/0.log" Jan 27 17:12:46 crc kubenswrapper[4966]: I0127 17:12:46.274829 4966 scope.go:117] "RemoveContainer" containerID="982a1a65f7d320712a671ab6c0c67e2642c2b38663d2742d518e8d03d7403ca4" Jan 27 17:12:46 crc kubenswrapper[4966]: I0127 17:12:46.274888 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lgskz/must-gather-24pds" Jan 27 17:12:46 crc kubenswrapper[4966]: I0127 17:12:46.298327 4966 scope.go:117] "RemoveContainer" containerID="427af27be2b0091963f55d63f3f31dc6b52cc63fa57db1f5086aa07f82878b69" Jan 27 17:12:46 crc kubenswrapper[4966]: I0127 17:12:46.533209 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" path="/var/lib/kubelet/pods/a912b0fc-de7a-49d3-ba0b-8ae0475bf650/volumes" Jan 27 17:12:49 crc kubenswrapper[4966]: I0127 17:12:49.901533 4966 scope.go:117] "RemoveContainer" containerID="5f48f709f7b43bf6680138d358597d0ba4012bd6c59136fd1ceddabb5fd2c265" Jan 27 17:13:10 crc kubenswrapper[4966]: I0127 17:13:10.124507 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:13:10 crc kubenswrapper[4966]: I0127 17:13:10.125160 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:13:40 crc kubenswrapper[4966]: I0127 17:13:40.119799 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:13:40 crc kubenswrapper[4966]: I0127 17:13:40.120424 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:14:10 crc kubenswrapper[4966]: I0127 17:14:10.119276 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:14:10 crc kubenswrapper[4966]: I0127 17:14:10.119973 4966 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:14:10 crc kubenswrapper[4966]: I0127 17:14:10.120029 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 17:14:10 crc kubenswrapper[4966]: I0127 17:14:10.121385 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b82f5c4006e37ce4acea807f2f48ce9819f9a916fa418c6d1ef6e9b0f839312"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:14:10 crc kubenswrapper[4966]: I0127 17:14:10.121470 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://0b82f5c4006e37ce4acea807f2f48ce9819f9a916fa418c6d1ef6e9b0f839312" gracePeriod=600 Jan 27 17:14:10 crc kubenswrapper[4966]: I0127 17:14:10.543332 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="0b82f5c4006e37ce4acea807f2f48ce9819f9a916fa418c6d1ef6e9b0f839312" exitCode=0 Jan 27 17:14:10 crc kubenswrapper[4966]: I0127 17:14:10.543404 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"0b82f5c4006e37ce4acea807f2f48ce9819f9a916fa418c6d1ef6e9b0f839312"} Jan 27 17:14:10 crc kubenswrapper[4966]: I0127 17:14:10.543718 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerStarted","Data":"b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8"} Jan 27 17:14:10 crc kubenswrapper[4966]: I0127 17:14:10.543745 4966 scope.go:117] "RemoveContainer" containerID="9841bf78727e11d933110f7b7944094396f81f277200fa659bb41b8bb0513246" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.034589 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qnbzt"] Jan 27 17:14:19 crc kubenswrapper[4966]: E0127 17:14:19.036737 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="registry-server" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.036859 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="registry-server" Jan 27 17:14:19 crc kubenswrapper[4966]: E0127 17:14:19.036993 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" containerName="copy" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.037086 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" containerName="copy" Jan 27 17:14:19 crc kubenswrapper[4966]: E0127 17:14:19.037180 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" 
containerName="extract-utilities" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.037257 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="extract-utilities" Jan 27 17:14:19 crc kubenswrapper[4966]: E0127 17:14:19.037355 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="extract-content" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.037430 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="extract-content" Jan 27 17:14:19 crc kubenswrapper[4966]: E0127 17:14:19.037519 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" containerName="gather" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.037596 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" containerName="gather" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.038079 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="5167bfdf-b60b-4f32-adbd-f133454ccf07" containerName="registry-server" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.038194 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" containerName="gather" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.038288 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="a912b0fc-de7a-49d3-ba0b-8ae0475bf650" containerName="copy" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.040627 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.062830 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qnbzt"] Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.146427 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86p8t\" (UniqueName: \"kubernetes.io/projected/b57fc368-d031-4eef-a1dc-42738660732d-kube-api-access-86p8t\") pod \"certified-operators-qnbzt\" (UID: \"b57fc368-d031-4eef-a1dc-42738660732d\") " pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.146474 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57fc368-d031-4eef-a1dc-42738660732d-utilities\") pod \"certified-operators-qnbzt\" (UID: \"b57fc368-d031-4eef-a1dc-42738660732d\") " pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.146526 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57fc368-d031-4eef-a1dc-42738660732d-catalog-content\") pod \"certified-operators-qnbzt\" (UID: \"b57fc368-d031-4eef-a1dc-42738660732d\") " pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.249798 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86p8t\" (UniqueName: \"kubernetes.io/projected/b57fc368-d031-4eef-a1dc-42738660732d-kube-api-access-86p8t\") pod \"certified-operators-qnbzt\" (UID: 
\"b57fc368-d031-4eef-a1dc-42738660732d\") " pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.249878 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57fc368-d031-4eef-a1dc-42738660732d-utilities\") pod \"certified-operators-qnbzt\" (UID: \"b57fc368-d031-4eef-a1dc-42738660732d\") " pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.250520 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57fc368-d031-4eef-a1dc-42738660732d-utilities\") pod \"certified-operators-qnbzt\" (UID: \"b57fc368-d031-4eef-a1dc-42738660732d\") " pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.250872 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57fc368-d031-4eef-a1dc-42738660732d-catalog-content\") pod \"certified-operators-qnbzt\" (UID: \"b57fc368-d031-4eef-a1dc-42738660732d\") " pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.251535 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57fc368-d031-4eef-a1dc-42738660732d-catalog-content\") pod \"certified-operators-qnbzt\" (UID: \"b57fc368-d031-4eef-a1dc-42738660732d\") " pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.282787 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86p8t\" (UniqueName: \"kubernetes.io/projected/b57fc368-d031-4eef-a1dc-42738660732d-kube-api-access-86p8t\") pod \"certified-operators-qnbzt\" (UID: \"b57fc368-d031-4eef-a1dc-42738660732d\") " pod="openshift-marketplace/certified-operators-qnbzt" Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.371053 4966 util.go:30] "No sandbox for pod can be found. 
Jan 27 17:14:19 crc kubenswrapper[4966]: I0127 17:14:19.942376 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qnbzt"]
Jan 27 17:14:19 crc kubenswrapper[4966]: W0127 17:14:19.950288 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb57fc368_d031_4eef_a1dc_42738660732d.slice/crio-3c2f6f33938dbf5b807734ab152c40982815bbd516beb8d95802fe3853cd61cb WatchSource:0}: Error finding container 3c2f6f33938dbf5b807734ab152c40982815bbd516beb8d95802fe3853cd61cb: Status 404 returned error can't find the container with id 3c2f6f33938dbf5b807734ab152c40982815bbd516beb8d95802fe3853cd61cb
Jan 27 17:14:20 crc kubenswrapper[4966]: I0127 17:14:20.660956 4966 generic.go:334] "Generic (PLEG): container finished" podID="b57fc368-d031-4eef-a1dc-42738660732d" containerID="ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064" exitCode=0
Jan 27 17:14:20 crc kubenswrapper[4966]: I0127 17:14:20.661175 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnbzt" event={"ID":"b57fc368-d031-4eef-a1dc-42738660732d","Type":"ContainerDied","Data":"ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064"}
Jan 27 17:14:20 crc kubenswrapper[4966]: I0127 17:14:20.661402 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnbzt" event={"ID":"b57fc368-d031-4eef-a1dc-42738660732d","Type":"ContainerStarted","Data":"3c2f6f33938dbf5b807734ab152c40982815bbd516beb8d95802fe3853cd61cb"}
Jan 27 17:14:20 crc kubenswrapper[4966]: I0127 17:14:20.663593 4966 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 17:14:21 crc kubenswrapper[4966]: I0127 17:14:21.674969 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnbzt" event={"ID":"b57fc368-d031-4eef-a1dc-42738660732d","Type":"ContainerStarted","Data":"e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47"}
Jan 27 17:14:23 crc kubenswrapper[4966]: I0127 17:14:23.699373 4966 generic.go:334] "Generic (PLEG): container finished" podID="b57fc368-d031-4eef-a1dc-42738660732d" containerID="e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47" exitCode=0
Jan 27 17:14:23 crc kubenswrapper[4966]: I0127 17:14:23.699468 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnbzt" event={"ID":"b57fc368-d031-4eef-a1dc-42738660732d","Type":"ContainerDied","Data":"e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47"}
Jan 27 17:14:25 crc kubenswrapper[4966]: I0127 17:14:25.722601 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnbzt" event={"ID":"b57fc368-d031-4eef-a1dc-42738660732d","Type":"ContainerStarted","Data":"065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07"}
Jan 27 17:14:25 crc kubenswrapper[4966]: I0127 17:14:25.742794 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qnbzt" podStartSLOduration=4.251123143 podStartE2EDuration="7.74277286s" podCreationTimestamp="2026-01-27 17:14:18 +0000 UTC" firstStartedPulling="2026-01-27 17:14:20.66330659 +0000 UTC m=+5526.966100078" lastFinishedPulling="2026-01-27 17:14:24.154956297 +0000 UTC m=+5530.457749795" observedRunningTime="2026-01-27 17:14:25.740607561 +0000 UTC m=+5532.043401079" watchObservedRunningTime="2026-01-27 17:14:25.74277286 +0000 UTC m=+5532.045566348"
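
The two durations in the startup-latency line are mutually consistent: podStartE2EDuration is the observed running time minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end time with the image-pull window subtracted (an inference from the numbers, not a statement of the tracker's exact definition):

    t_{\mathrm{E2E}}  = 17{:}14{:}25.742772860 - 17{:}14{:}18.000000000 = 7.742772860\ \mathrm{s}
    t_{\mathrm{pull}} = 17{:}14{:}24.154956297 - 17{:}14{:}20.663306590 = 3.491649707\ \mathrm{s}
    t_{\mathrm{SLO}}  \approx t_{\mathrm{E2E}} - t_{\mathrm{pull}} = 7.742772860 - 3.491649707 \approx 4.251123\ \mathrm{s}

which matches podStartSLOduration=4.251123143 up to rounding.
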
Jan 27 17:14:29 crc kubenswrapper[4966]: I0127 17:14:29.372089 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qnbzt"
Jan 27 17:14:29 crc kubenswrapper[4966]: I0127 17:14:29.379011 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qnbzt"
Jan 27 17:14:30 crc kubenswrapper[4966]: I0127 17:14:30.486973 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qnbzt" podUID="b57fc368-d031-4eef-a1dc-42738660732d" containerName="registry-server" probeResult="failure" output=<
Jan 27 17:14:30 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s
Jan 27 17:14:30 crc kubenswrapper[4966]: >
Jan 27 17:14:39 crc kubenswrapper[4966]: I0127 17:14:39.428239 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qnbzt"
Jan 27 17:14:39 crc kubenswrapper[4966]: I0127 17:14:39.509974 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qnbzt"
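
The probe output suggests a gRPC connect check against the registry-server's port :50051 with a 1s budget (an assumption from the message text; the pod spec is not part of this log). A comparable health check in Go, using the standard gRPC health protocol:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    // probe dials addr and runs a gRPC health check within one second,
    // roughly what the startup/readiness probes above appear to do.
    func probe(addr string) error {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()
    	conn, err := grpc.DialContext(ctx, addr,
    		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
    	if err != nil {
    		return fmt.Errorf("timeout: failed to connect service %q within 1s: %w", addr, err)
    	}
    	defer conn.Close()
    	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
    	if err != nil {
    		return err
    	}
    	if resp.Status != healthpb.HealthCheckResponse_SERVING {
    		return fmt.Errorf("service not serving: %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	if err := probe(":50051"); err != nil {
    		fmt.Println(err) // during startup this fails, then turns healthy
    	}
    }
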
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.555157 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57fc368-d031-4eef-a1dc-42738660732d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.559399 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57fc368-d031-4eef-a1dc-42738660732d-kube-api-access-86p8t" (OuterVolumeSpecName: "kube-api-access-86p8t") pod "b57fc368-d031-4eef-a1dc-42738660732d" (UID: "b57fc368-d031-4eef-a1dc-42738660732d"). InnerVolumeSpecName "kube-api-access-86p8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.614416 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b57fc368-d031-4eef-a1dc-42738660732d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b57fc368-d031-4eef-a1dc-42738660732d" (UID: "b57fc368-d031-4eef-a1dc-42738660732d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.657845 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57fc368-d031-4eef-a1dc-42738660732d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.657912 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86p8t\" (UniqueName: \"kubernetes.io/projected/b57fc368-d031-4eef-a1dc-42738660732d-kube-api-access-86p8t\") on node \"crc\" DevicePath \"\"" Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.908887 4966 generic.go:334] "Generic (PLEG): container finished" podID="b57fc368-d031-4eef-a1dc-42738660732d" containerID="065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07" exitCode=0 Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.909129 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnbzt" event={"ID":"b57fc368-d031-4eef-a1dc-42738660732d","Type":"ContainerDied","Data":"065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07"} Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.909253 4966 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.910658 4966 scope.go:117] "RemoveContainer" containerID="065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07"
Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.910614 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnbzt" event={"ID":"b57fc368-d031-4eef-a1dc-42738660732d","Type":"ContainerDied","Data":"3c2f6f33938dbf5b807734ab152c40982815bbd516beb8d95802fe3853cd61cb"}
Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.939017 4966 scope.go:117] "RemoveContainer" containerID="e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47"
Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.967073 4966 scope.go:117] "RemoveContainer" containerID="ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064"
Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.967400 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qnbzt"]
Jan 27 17:14:41 crc kubenswrapper[4966]: I0127 17:14:41.981265 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qnbzt"]
Jan 27 17:14:42 crc kubenswrapper[4966]: I0127 17:14:42.038789 4966 scope.go:117] "RemoveContainer" containerID="065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07"
Jan 27 17:14:42 crc kubenswrapper[4966]: E0127 17:14:42.040131 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07\": container with ID starting with 065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07 not found: ID does not exist" containerID="065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07"
Jan 27 17:14:42 crc kubenswrapper[4966]: I0127 17:14:42.040204 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07"} err="failed to get container status \"065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07\": rpc error: code = NotFound desc = could not find container \"065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07\": container with ID starting with 065ea6567d52c22bf71ce9c0d5ff1e71c1ebee6abf14cc65c80db802784fed07 not found: ID does not exist"
Jan 27 17:14:42 crc kubenswrapper[4966]: I0127 17:14:42.040239 4966 scope.go:117] "RemoveContainer" containerID="e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47"
Jan 27 17:14:42 crc kubenswrapper[4966]: E0127 17:14:42.040688 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47\": container with ID starting with e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47 not found: ID does not exist" containerID="e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47"
Jan 27 17:14:42 crc kubenswrapper[4966]: I0127 17:14:42.040731 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47"} err="failed to get container status \"e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47\": rpc error: code = NotFound desc = could not find container \"e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47\": container with ID starting with e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47 not found: ID does not exist"
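
These NotFound errors are benign: the containers were already removed by the first "RemoveContainer" pass, and the retry after the API REMOVE finds nothing left to delete. Cleanup like this is usually written to be idempotent, treating NotFound as success; a sketch using a stand-in runtime interface (not the CRI API):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    type runtime interface{ Delete(id string) error }

    // removeContainer deletes a container and treats gRPC NotFound as
    // success: a duplicate sweep has nothing to do, matching the
    // "DeleteContainer returned error ... NotFound" lines above.
    func removeContainer(rt runtime, id string) error {
    	if err := rt.Delete(id); err != nil {
    		if status.Code(err) == codes.NotFound {
    			return nil // already gone: benign during duplicate cleanup
    		}
    		return fmt.Errorf("remove %s: %w", id, err)
    	}
    	return nil
    }

    type fakeRuntime struct{}

    func (fakeRuntime) Delete(id string) error {
    	return status.Error(codes.NotFound, "could not find container "+id)
    }

    func main() {
    	// hypothetical ID for illustration
    	fmt.Println(removeContainer(fakeRuntime{}, "example-container-id")) // <nil>
    }
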
container \"e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47\": container with ID starting with e0cd61f2899322d2cd749f1b7ac4873ae97dcdc85cf67fa230bcd112844d2b47 not found: ID does not exist" Jan 27 17:14:42 crc kubenswrapper[4966]: I0127 17:14:42.040753 4966 scope.go:117] "RemoveContainer" containerID="ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064" Jan 27 17:14:42 crc kubenswrapper[4966]: E0127 17:14:42.041096 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064\": container with ID starting with ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064 not found: ID does not exist" containerID="ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064" Jan 27 17:14:42 crc kubenswrapper[4966]: I0127 17:14:42.041126 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064"} err="failed to get container status \"ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064\": rpc error: code = NotFound desc = could not find container \"ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064\": container with ID starting with ade54ebdeeb917663d8d40cf69867eae7116bb4be546e516595fc7184784b064 not found: ID does not exist" Jan 27 17:14:42 crc kubenswrapper[4966]: I0127 17:14:42.543143 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57fc368-d031-4eef-a1dc-42738660732d" path="/var/lib/kubelet/pods/b57fc368-d031-4eef-a1dc-42738660732d/volumes" Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.828886 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hhdnc"] Jan 27 17:14:45 crc kubenswrapper[4966]: E0127 17:14:45.829756 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57fc368-d031-4eef-a1dc-42738660732d" containerName="registry-server" Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.829780 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57fc368-d031-4eef-a1dc-42738660732d" containerName="registry-server" Jan 27 17:14:45 crc kubenswrapper[4966]: E0127 17:14:45.829800 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57fc368-d031-4eef-a1dc-42738660732d" containerName="extract-utilities" Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.829811 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57fc368-d031-4eef-a1dc-42738660732d" containerName="extract-utilities" Jan 27 17:14:45 crc kubenswrapper[4966]: E0127 17:14:45.829851 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57fc368-d031-4eef-a1dc-42738660732d" containerName="extract-content" Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.829866 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57fc368-d031-4eef-a1dc-42738660732d" containerName="extract-content" Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.830330 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57fc368-d031-4eef-a1dc-42738660732d" containerName="registry-server" Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.833292 4966 util.go:30] "No sandbox for pod can be found. 
Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.850144 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhdnc"]
Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.968528 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-catalog-content\") pod \"redhat-marketplace-hhdnc\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") " pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.968816 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9m7l\" (UniqueName: \"kubernetes.io/projected/b7417edb-da02-4a16-84d0-8f044323f991-kube-api-access-g9m7l\") pod \"redhat-marketplace-hhdnc\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") " pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:45 crc kubenswrapper[4966]: I0127 17:14:45.968845 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-utilities\") pod \"redhat-marketplace-hhdnc\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") " pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.072645 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-catalog-content\") pod \"redhat-marketplace-hhdnc\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") " pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.072711 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9m7l\" (UniqueName: \"kubernetes.io/projected/b7417edb-da02-4a16-84d0-8f044323f991-kube-api-access-g9m7l\") pod \"redhat-marketplace-hhdnc\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") " pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.072735 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-utilities\") pod \"redhat-marketplace-hhdnc\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") " pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.073364 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-utilities\") pod \"redhat-marketplace-hhdnc\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") " pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.073722 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-catalog-content\") pod \"redhat-marketplace-hhdnc\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") " pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.103628 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9m7l\" (UniqueName: \"kubernetes.io/projected/b7417edb-da02-4a16-84d0-8f044323f991-kube-api-access-g9m7l\") pod \"redhat-marketplace-hhdnc\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") " pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.156330 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.662602 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhdnc"]
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.986020 4966 generic.go:334] "Generic (PLEG): container finished" podID="b7417edb-da02-4a16-84d0-8f044323f991" containerID="4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e" exitCode=0
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.987543 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhdnc" event={"ID":"b7417edb-da02-4a16-84d0-8f044323f991","Type":"ContainerDied","Data":"4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e"}
Jan 27 17:14:46 crc kubenswrapper[4966]: I0127 17:14:46.987601 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhdnc" event={"ID":"b7417edb-da02-4a16-84d0-8f044323f991","Type":"ContainerStarted","Data":"e4ad16ef068e7d0ad9081d418b58feacda4b17e98943cb7aecc515d718424d79"}
Jan 27 17:14:49 crc kubenswrapper[4966]: I0127 17:14:49.015999 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhdnc" event={"ID":"b7417edb-da02-4a16-84d0-8f044323f991","Type":"ContainerStarted","Data":"96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f"}
Jan 27 17:14:49 crc kubenswrapper[4966]: I0127 17:14:49.998146 4966 scope.go:117] "RemoveContainer" containerID="aa3cc1d946686866bb370c1dfff1fa945a5796194e3ae8570b768b9b52fa135b"
Jan 27 17:14:50 crc kubenswrapper[4966]: I0127 17:14:50.025957 4966 scope.go:117] "RemoveContainer" containerID="232c065cbf0672985d2c3380536966da23da0c140c9314d4d23eb3548e9e26d8"
Jan 27 17:14:50 crc kubenswrapper[4966]: I0127 17:14:50.029802 4966 generic.go:334] "Generic (PLEG): container finished" podID="b7417edb-da02-4a16-84d0-8f044323f991" containerID="96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f" exitCode=0
Jan 27 17:14:50 crc kubenswrapper[4966]: I0127 17:14:50.029850 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhdnc" event={"ID":"b7417edb-da02-4a16-84d0-8f044323f991","Type":"ContainerDied","Data":"96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f"}
Jan 27 17:14:50 crc kubenswrapper[4966]: I0127 17:14:50.072977 4966 scope.go:117] "RemoveContainer" containerID="d289b06978927acacb72d92e670415f6f36daecc97048c1c5f1eb83c0aaf2d07"
Jan 27 17:14:51 crc kubenswrapper[4966]: I0127 17:14:51.043943 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhdnc" event={"ID":"b7417edb-da02-4a16-84d0-8f044323f991","Type":"ContainerStarted","Data":"9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a"}
Jan 27 17:14:51 crc kubenswrapper[4966]: I0127 17:14:51.066450 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hhdnc" podStartSLOduration=2.625247511 podStartE2EDuration="6.066431958s" podCreationTimestamp="2026-01-27 17:14:45 +0000 UTC" firstStartedPulling="2026-01-27 17:14:46.991770851 +0000 UTC m=+5553.294564339" lastFinishedPulling="2026-01-27 17:14:50.432955298 +0000 UTC m=+5556.735748786" observedRunningTime="2026-01-27 17:14:51.060935616 +0000 UTC m=+5557.363729114" watchObservedRunningTime="2026-01-27 17:14:51.066431958 +0000 UTC m=+5557.369225446"
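
The interleaved "RemoveContainer" lines (aa3cc1d9..., 232c065c..., d289b069...) reference container IDs that never started in this excerpt, which is consistent with the kubelet's periodic garbage collection of exited containers left over from earlier pod instances. An illustrative reduction of an age/count GC sweep (not the kubelet's actual algorithm or data model):

    package main

    import (
    	"fmt"
    	"time"
    )

    // deadContainer is a simplified record of an exited container.
    type deadContainer struct {
    	id       string
    	exitedAt time.Time
    }

    // gcSweep removes exited containers older than minAge, keeping at most
    // keep of the newest ones -- the kind of policy that surfaces in the
    // log as a burst of "RemoveContainer" lines.
    func gcSweep(dead []deadContainer, minAge time.Duration, keep int) []string {
    	var removed []string
    	cutoff := time.Now().Add(-minAge)
    	for i, c := range dead { // dead is ordered oldest first
    		if len(dead)-i > keep && c.exitedAt.Before(cutoff) {
    			removed = append(removed, c.id)
    		}
    	}
    	return removed
    }

    func main() {
    	dead := []deadContainer{ // hypothetical IDs
    		{"aa3cc1d9-old", time.Now().Add(-10 * time.Minute)},
    		{"232c065c-old", time.Now().Add(-8 * time.Minute)},
    		{"d289b069-new", time.Now().Add(-5 * time.Minute)},
    	}
    	fmt.Println(gcSweep(dead, time.Minute, 1)) // removes the two oldest
    }
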
Jan 27 17:14:56 crc kubenswrapper[4966]: I0127 17:14:56.157440 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:56 crc kubenswrapper[4966]: I0127 17:14:56.158181 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:56 crc kubenswrapper[4966]: I0127 17:14:56.232094 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:57 crc kubenswrapper[4966]: I0127 17:14:57.157854 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:57 crc kubenswrapper[4966]: I0127 17:14:57.217553 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhdnc"]
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.143267 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hhdnc" podUID="b7417edb-da02-4a16-84d0-8f044323f991" containerName="registry-server" containerID="cri-o://9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a" gracePeriod=2
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.761376 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.852881 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9m7l\" (UniqueName: \"kubernetes.io/projected/b7417edb-da02-4a16-84d0-8f044323f991-kube-api-access-g9m7l\") pod \"b7417edb-da02-4a16-84d0-8f044323f991\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") "
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.852985 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-utilities\") pod \"b7417edb-da02-4a16-84d0-8f044323f991\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") "
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.853220 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-catalog-content\") pod \"b7417edb-da02-4a16-84d0-8f044323f991\" (UID: \"b7417edb-da02-4a16-84d0-8f044323f991\") "
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.854437 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-utilities" (OuterVolumeSpecName: "utilities") pod "b7417edb-da02-4a16-84d0-8f044323f991" (UID: "b7417edb-da02-4a16-84d0-8f044323f991"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.869297 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7417edb-da02-4a16-84d0-8f044323f991-kube-api-access-g9m7l" (OuterVolumeSpecName: "kube-api-access-g9m7l") pod "b7417edb-da02-4a16-84d0-8f044323f991" (UID: "b7417edb-da02-4a16-84d0-8f044323f991"). InnerVolumeSpecName "kube-api-access-g9m7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.876308 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7417edb-da02-4a16-84d0-8f044323f991" (UID: "b7417edb-da02-4a16-84d0-8f044323f991"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.956710 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9m7l\" (UniqueName: \"kubernetes.io/projected/b7417edb-da02-4a16-84d0-8f044323f991-kube-api-access-g9m7l\") on node \"crc\" DevicePath \"\""
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.956776 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:14:59 crc kubenswrapper[4966]: I0127 17:14:59.956800 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7417edb-da02-4a16-84d0-8f044323f991-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.163416 4966 generic.go:334] "Generic (PLEG): container finished" podID="b7417edb-da02-4a16-84d0-8f044323f991" containerID="9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a" exitCode=0
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.163463 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhdnc" event={"ID":"b7417edb-da02-4a16-84d0-8f044323f991","Type":"ContainerDied","Data":"9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a"}
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.163491 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhdnc" event={"ID":"b7417edb-da02-4a16-84d0-8f044323f991","Type":"ContainerDied","Data":"e4ad16ef068e7d0ad9081d418b58feacda4b17e98943cb7aecc515d718424d79"}
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.163507 4966 scope.go:117] "RemoveContainer" containerID="9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.163654 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhdnc"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.200216 4966 scope.go:117] "RemoveContainer" containerID="96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.215339 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhdnc"]
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.227643 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"]
Jan 27 17:15:00 crc kubenswrapper[4966]: E0127 17:15:00.228318 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7417edb-da02-4a16-84d0-8f044323f991" containerName="registry-server"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.228340 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7417edb-da02-4a16-84d0-8f044323f991" containerName="registry-server"
Jan 27 17:15:00 crc kubenswrapper[4966]: E0127 17:15:00.228369 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7417edb-da02-4a16-84d0-8f044323f991" containerName="extract-utilities"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.228378 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7417edb-da02-4a16-84d0-8f044323f991" containerName="extract-utilities"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.228439 4966 scope.go:117] "RemoveContainer" containerID="4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e"
Jan 27 17:15:00 crc kubenswrapper[4966]: E0127 17:15:00.228450 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7417edb-da02-4a16-84d0-8f044323f991" containerName="extract-content"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.228628 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7417edb-da02-4a16-84d0-8f044323f991" containerName="extract-content"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.229266 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7417edb-da02-4a16-84d0-8f044323f991" containerName="registry-server"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.230939 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.249673 4966 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.250072 4966 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
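
The two reflector lines show the kubelet populating watch caches for exactly the ConfigMap and Secret this Job pod references, rather than for whole namespaces. A comparable single-object watch with client-go (a sketch; assumes an in-cluster config and the standard shared-informer factory API):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/cache"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Watch a single ConfigMap by name, the way the kubelet scopes its
    	// cache to "collect-profiles-config" above.
    	factory := informers.NewSharedInformerFactoryWithOptions(cs, 0,
    		informers.WithNamespace("openshift-operator-lifecycle-manager"),
    		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
    			o.FieldSelector = "metadata.name=collect-profiles-config"
    		}))
    	inf := factory.Core().V1().ConfigMaps().Informer()
    	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
    		AddFunc: func(obj interface{}) {
    			cm := obj.(*corev1.ConfigMap)
    			fmt.Println("cache populated for", cm.Namespace+"/"+cm.Name)
    		},
    	})
    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)
    	factory.WaitForCacheSync(stop)
    	select {} // keep watching
    }
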
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.253780 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhdnc"]
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.269221 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"]
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.282262 4966 scope.go:117] "RemoveContainer" containerID="9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a"
Jan 27 17:15:00 crc kubenswrapper[4966]: E0127 17:15:00.283473 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a\": container with ID starting with 9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a not found: ID does not exist" containerID="9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.283550 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a"} err="failed to get container status \"9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a\": rpc error: code = NotFound desc = could not find container \"9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a\": container with ID starting with 9478cd861518291e224a055597f4fe05d9e96ddad634b3fc15cf16d3ecd2561a not found: ID does not exist"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.283597 4966 scope.go:117] "RemoveContainer" containerID="96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f"
Jan 27 17:15:00 crc kubenswrapper[4966]: E0127 17:15:00.283985 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f\": container with ID starting with 96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f not found: ID does not exist" containerID="96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.284028 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f"} err="failed to get container status \"96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f\": rpc error: code = NotFound desc = could not find container \"96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f\": container with ID starting with 96c0c1c2cbd44d0766a6af6b72822a452d4e86660e3b1e7124f778ac80f4aa4f not found: ID does not exist"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.284066 4966 scope.go:117] "RemoveContainer" containerID="4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e"
Jan 27 17:15:00 crc kubenswrapper[4966]: E0127 17:15:00.284473 4966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e\": container with ID starting with 4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e not found: ID does not exist" containerID="4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.284508 4966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e"} err="failed to get container status \"4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e\": rpc error: code = NotFound desc = could not find container \"4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e\": container with ID starting with 4387f75cb1dfe50e0853495b9f95046f2986a7c1959ae9019e71ebc6097c9e5e not found: ID does not exist"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.367368 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28506225-cf9e-4452-a627-60cedf13019a-secret-volume\") pod \"collect-profiles-29492235-2zjx7\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.367836 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28506225-cf9e-4452-a627-60cedf13019a-config-volume\") pod \"collect-profiles-29492235-2zjx7\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.368397 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dct8g\" (UniqueName: \"kubernetes.io/projected/28506225-cf9e-4452-a627-60cedf13019a-kube-api-access-dct8g\") pod \"collect-profiles-29492235-2zjx7\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.470662 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28506225-cf9e-4452-a627-60cedf13019a-secret-volume\") pod \"collect-profiles-29492235-2zjx7\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.470941 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28506225-cf9e-4452-a627-60cedf13019a-config-volume\") pod \"collect-profiles-29492235-2zjx7\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.471128 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dct8g\" (UniqueName: \"kubernetes.io/projected/28506225-cf9e-4452-a627-60cedf13019a-kube-api-access-dct8g\") pod \"collect-profiles-29492235-2zjx7\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.471834 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28506225-cf9e-4452-a627-60cedf13019a-config-volume\") pod \"collect-profiles-29492235-2zjx7\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.478357 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28506225-cf9e-4452-a627-60cedf13019a-secret-volume\") pod \"collect-profiles-29492235-2zjx7\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.504468 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dct8g\" (UniqueName: \"kubernetes.io/projected/28506225-cf9e-4452-a627-60cedf13019a-kube-api-access-dct8g\") pod \"collect-profiles-29492235-2zjx7\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.542282 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7417edb-da02-4a16-84d0-8f044323f991" path="/var/lib/kubelet/pods/b7417edb-da02-4a16-84d0-8f044323f991/volumes"
Jan 27 17:15:00 crc kubenswrapper[4966]: I0127 17:15:00.622599 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:01 crc kubenswrapper[4966]: W0127 17:15:01.171868 4966 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28506225_cf9e_4452_a627_60cedf13019a.slice/crio-5dd811f6e85c94bba2d6f304bc5af64de02fe39a06b7f4ab4b6bb2c39b98e41f WatchSource:0}: Error finding container 5dd811f6e85c94bba2d6f304bc5af64de02fe39a06b7f4ab4b6bb2c39b98e41f: Status 404 returned error can't find the container with id 5dd811f6e85c94bba2d6f304bc5af64de02fe39a06b7f4ab4b6bb2c39b98e41f
Jan 27 17:15:01 crc kubenswrapper[4966]: I0127 17:15:01.173715 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"]
Jan 27 17:15:02 crc kubenswrapper[4966]: I0127 17:15:02.193465 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7" event={"ID":"28506225-cf9e-4452-a627-60cedf13019a","Type":"ContainerStarted","Data":"26f16c0207ee057815ad3f840f25e631fc753bdbbd24818218983e65c6bffe53"}
Jan 27 17:15:02 crc kubenswrapper[4966]: I0127 17:15:02.194917 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7" event={"ID":"28506225-cf9e-4452-a627-60cedf13019a","Type":"ContainerStarted","Data":"5dd811f6e85c94bba2d6f304bc5af64de02fe39a06b7f4ab4b6bb2c39b98e41f"}
Jan 27 17:15:02 crc kubenswrapper[4966]: I0127 17:15:02.224994 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7" podStartSLOduration=2.224973526 podStartE2EDuration="2.224973526s" podCreationTimestamp="2026-01-27 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:15:02.214332123 +0000 UTC m=+5568.517125631" watchObservedRunningTime="2026-01-27 17:15:02.224973526 +0000 UTC m=+5568.527767014"
Jan 27 17:15:03 crc kubenswrapper[4966]: I0127 17:15:03.262569 4966 generic.go:334] "Generic (PLEG): container finished" podID="28506225-cf9e-4452-a627-60cedf13019a" containerID="26f16c0207ee057815ad3f840f25e631fc753bdbbd24818218983e65c6bffe53" exitCode=0
Jan 27 17:15:03 crc kubenswrapper[4966]: I0127 17:15:03.262872 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7" event={"ID":"28506225-cf9e-4452-a627-60cedf13019a","Type":"ContainerDied","Data":"26f16c0207ee057815ad3f840f25e631fc753bdbbd24818218983e65c6bffe53"}
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.165210 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.285632 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dct8g\" (UniqueName: \"kubernetes.io/projected/28506225-cf9e-4452-a627-60cedf13019a-kube-api-access-dct8g\") pod \"28506225-cf9e-4452-a627-60cedf13019a\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") "
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.285994 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28506225-cf9e-4452-a627-60cedf13019a-secret-volume\") pod \"28506225-cf9e-4452-a627-60cedf13019a\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") "
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.286031 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28506225-cf9e-4452-a627-60cedf13019a-config-volume\") pod \"28506225-cf9e-4452-a627-60cedf13019a\" (UID: \"28506225-cf9e-4452-a627-60cedf13019a\") "
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.286792 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28506225-cf9e-4452-a627-60cedf13019a-config-volume" (OuterVolumeSpecName: "config-volume") pod "28506225-cf9e-4452-a627-60cedf13019a" (UID: "28506225-cf9e-4452-a627-60cedf13019a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.292567 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28506225-cf9e-4452-a627-60cedf13019a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "28506225-cf9e-4452-a627-60cedf13019a" (UID: "28506225-cf9e-4452-a627-60cedf13019a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.294514 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7" event={"ID":"28506225-cf9e-4452-a627-60cedf13019a","Type":"ContainerDied","Data":"5dd811f6e85c94bba2d6f304bc5af64de02fe39a06b7f4ab4b6bb2c39b98e41f"}
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.294565 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dd811f6e85c94bba2d6f304bc5af64de02fe39a06b7f4ab4b6bb2c39b98e41f"
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.294630 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-2zjx7"
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.295035 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28506225-cf9e-4452-a627-60cedf13019a-kube-api-access-dct8g" (OuterVolumeSpecName: "kube-api-access-dct8g") pod "28506225-cf9e-4452-a627-60cedf13019a" (UID: "28506225-cf9e-4452-a627-60cedf13019a"). InnerVolumeSpecName "kube-api-access-dct8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.316627 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"]
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.328397 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-gp2zc"]
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.389247 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dct8g\" (UniqueName: \"kubernetes.io/projected/28506225-cf9e-4452-a627-60cedf13019a-kube-api-access-dct8g\") on node \"crc\" DevicePath \"\""
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.389296 4966 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28506225-cf9e-4452-a627-60cedf13019a-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 17:15:05 crc kubenswrapper[4966]: I0127 17:15:05.389309 4966 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28506225-cf9e-4452-a627-60cedf13019a-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 17:15:06 crc kubenswrapper[4966]: I0127 17:15:06.542690 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e03c47b-3202-4b85-a3dd-af2ab3f241c6" path="/var/lib/kubelet/pods/3e03c47b-3202-4b85-a3dd-af2ab3f241c6/volumes"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.039101 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-np4dm"]
Jan 27 17:15:18 crc kubenswrapper[4966]: E0127 17:15:18.040351 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28506225-cf9e-4452-a627-60cedf13019a" containerName="collect-profiles"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.040369 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="28506225-cf9e-4452-a627-60cedf13019a" containerName="collect-profiles"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.040659 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="28506225-cf9e-4452-a627-60cedf13019a" containerName="collect-profiles"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.043427 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.052954 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-np4dm"]
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.133654 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec82d0e4-47ab-456d-aa5c-646aaa7cee42-catalog-content\") pod \"community-operators-np4dm\" (UID: \"ec82d0e4-47ab-456d-aa5c-646aaa7cee42\") " pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.134178 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec82d0e4-47ab-456d-aa5c-646aaa7cee42-utilities\") pod \"community-operators-np4dm\" (UID: \"ec82d0e4-47ab-456d-aa5c-646aaa7cee42\") " pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.134418 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhp96\" (UniqueName: \"kubernetes.io/projected/ec82d0e4-47ab-456d-aa5c-646aaa7cee42-kube-api-access-lhp96\") pod \"community-operators-np4dm\" (UID: \"ec82d0e4-47ab-456d-aa5c-646aaa7cee42\") " pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.237484 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec82d0e4-47ab-456d-aa5c-646aaa7cee42-utilities\") pod \"community-operators-np4dm\" (UID: \"ec82d0e4-47ab-456d-aa5c-646aaa7cee42\") " pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.237647 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhp96\" (UniqueName: \"kubernetes.io/projected/ec82d0e4-47ab-456d-aa5c-646aaa7cee42-kube-api-access-lhp96\") pod \"community-operators-np4dm\" (UID: \"ec82d0e4-47ab-456d-aa5c-646aaa7cee42\") " pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.237882 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec82d0e4-47ab-456d-aa5c-646aaa7cee42-catalog-content\") pod \"community-operators-np4dm\" (UID: \"ec82d0e4-47ab-456d-aa5c-646aaa7cee42\") " pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.238221 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec82d0e4-47ab-456d-aa5c-646aaa7cee42-utilities\") pod \"community-operators-np4dm\" (UID: \"ec82d0e4-47ab-456d-aa5c-646aaa7cee42\") " pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.238771 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec82d0e4-47ab-456d-aa5c-646aaa7cee42-catalog-content\") pod \"community-operators-np4dm\" (UID: \"ec82d0e4-47ab-456d-aa5c-646aaa7cee42\") " pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.647913 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhp96\" (UniqueName: \"kubernetes.io/projected/ec82d0e4-47ab-456d-aa5c-646aaa7cee42-kube-api-access-lhp96\") pod \"community-operators-np4dm\" (UID: \"ec82d0e4-47ab-456d-aa5c-646aaa7cee42\") " pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:18 crc kubenswrapper[4966]: I0127 17:15:18.671471 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:19 crc kubenswrapper[4966]: I0127 17:15:19.301683 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-np4dm"]
Jan 27 17:15:19 crc kubenswrapper[4966]: I0127 17:15:19.472188 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-np4dm" event={"ID":"ec82d0e4-47ab-456d-aa5c-646aaa7cee42","Type":"ContainerStarted","Data":"61ae4d054c98f97657a5668ffa86bc9b661f42e6bbc18a5492ef337fa0fa012a"}
Jan 27 17:15:20 crc kubenswrapper[4966]: I0127 17:15:20.489336 4966 generic.go:334] "Generic (PLEG): container finished" podID="ec82d0e4-47ab-456d-aa5c-646aaa7cee42" containerID="3dfa4e6e393f0412e2d43d567e7d49e2b0f75450b5c721ad660a99861125a584" exitCode=0
Jan 27 17:15:20 crc kubenswrapper[4966]: I0127 17:15:20.489433 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-np4dm" event={"ID":"ec82d0e4-47ab-456d-aa5c-646aaa7cee42","Type":"ContainerDied","Data":"3dfa4e6e393f0412e2d43d567e7d49e2b0f75450b5c721ad660a99861125a584"}
Jan 27 17:15:26 crc kubenswrapper[4966]: I0127 17:15:26.567663 4966 generic.go:334] "Generic (PLEG): container finished" podID="ec82d0e4-47ab-456d-aa5c-646aaa7cee42" containerID="4e675662234e2ffc890b520ab02374b6540ccfc6d440eaf2e7f7706b209185aa" exitCode=0
Jan 27 17:15:26 crc kubenswrapper[4966]: I0127 17:15:26.567745 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-np4dm" event={"ID":"ec82d0e4-47ab-456d-aa5c-646aaa7cee42","Type":"ContainerDied","Data":"4e675662234e2ffc890b520ab02374b6540ccfc6d440eaf2e7f7706b209185aa"}
Jan 27 17:15:28 crc kubenswrapper[4966]: I0127 17:15:28.596267 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-np4dm" event={"ID":"ec82d0e4-47ab-456d-aa5c-646aaa7cee42","Type":"ContainerStarted","Data":"2b7211868658a5ef38cc44cd90cc717654251f3ccd099d38f92d337101ceb1fc"}
Jan 27 17:15:28 crc kubenswrapper[4966]: I0127 17:15:28.616464 4966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-np4dm" podStartSLOduration=3.924059493 podStartE2EDuration="10.61644599s" podCreationTimestamp="2026-01-27 17:15:18 +0000 UTC" firstStartedPulling="2026-01-27 17:15:20.492203213 +0000 UTC m=+5586.794996711" lastFinishedPulling="2026-01-27 17:15:27.18458972 +0000 UTC m=+5593.487383208" observedRunningTime="2026-01-27 17:15:28.614913783 +0000 UTC m=+5594.917707281" watchObservedRunningTime="2026-01-27 17:15:28.61644599 +0000 UTC m=+5594.919239488"
Jan 27 17:15:28 crc kubenswrapper[4966]: I0127 17:15:28.672070 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:28 crc kubenswrapper[4966]: I0127 17:15:28.672217 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:29 crc kubenswrapper[4966]: I0127 17:15:29.721775 4966 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-np4dm" podUID="ec82d0e4-47ab-456d-aa5c-646aaa7cee42" containerName="registry-server" probeResult="failure" output=<
Jan 27 17:15:29 crc kubenswrapper[4966]: timeout: failed to connect service ":50051" within 1s
Jan 27 17:15:29 crc kubenswrapper[4966]: >
Jan 27 17:15:38 crc kubenswrapper[4966]: I0127 17:15:38.727847 4966 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:38 crc kubenswrapper[4966]: I0127 17:15:38.790839 4966 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-np4dm"
Jan 27 17:15:39 crc kubenswrapper[4966]: I0127 17:15:39.722225 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-np4dm"]
Jan 27 17:15:39 crc kubenswrapper[4966]: I0127 17:15:39.871508 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-85mqp"]
Jan 27 17:15:39 crc kubenswrapper[4966]: I0127 17:15:39.871792 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-85mqp" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" containerID="cri-o://c5ef936eaa3fb8347ffddd13c05f053ebbd12d16c3c2c2007442555e9a147b8b" gracePeriod=2
Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.744763 4966 generic.go:334] "Generic (PLEG): container finished" podID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerID="c5ef936eaa3fb8347ffddd13c05f053ebbd12d16c3c2c2007442555e9a147b8b" exitCode=0
Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.744855 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85mqp" event={"ID":"055a60e0-b7aa-4b35-9807-01fe7095113d","Type":"ContainerDied","Data":"c5ef936eaa3fb8347ffddd13c05f053ebbd12d16c3c2c2007442555e9a147b8b"}
Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.745344 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85mqp" event={"ID":"055a60e0-b7aa-4b35-9807-01fe7095113d","Type":"ContainerDied","Data":"48daecc54db6494bfa81292f5c46d4f90a04f2a5e576ebd4a83e6885c1f3e2a8"}
Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.745357 4966 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48daecc54db6494bfa81292f5c46d4f90a04f2a5e576ebd4a83e6885c1f3e2a8"
Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.788950 4966 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85mqp"
Need to start a new one" pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.815636 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxzcq\" (UniqueName: \"kubernetes.io/projected/055a60e0-b7aa-4b35-9807-01fe7095113d-kube-api-access-dxzcq\") pod \"055a60e0-b7aa-4b35-9807-01fe7095113d\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.815948 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-catalog-content\") pod \"055a60e0-b7aa-4b35-9807-01fe7095113d\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.816030 4966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-utilities\") pod \"055a60e0-b7aa-4b35-9807-01fe7095113d\" (UID: \"055a60e0-b7aa-4b35-9807-01fe7095113d\") " Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.821005 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-utilities" (OuterVolumeSpecName: "utilities") pod "055a60e0-b7aa-4b35-9807-01fe7095113d" (UID: "055a60e0-b7aa-4b35-9807-01fe7095113d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.836662 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/055a60e0-b7aa-4b35-9807-01fe7095113d-kube-api-access-dxzcq" (OuterVolumeSpecName: "kube-api-access-dxzcq") pod "055a60e0-b7aa-4b35-9807-01fe7095113d" (UID: "055a60e0-b7aa-4b35-9807-01fe7095113d"). InnerVolumeSpecName "kube-api-access-dxzcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.882265 4966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "055a60e0-b7aa-4b35-9807-01fe7095113d" (UID: "055a60e0-b7aa-4b35-9807-01fe7095113d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.918817 4966 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.919121 4966 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a60e0-b7aa-4b35-9807-01fe7095113d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:15:40 crc kubenswrapper[4966]: I0127 17:15:40.919215 4966 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxzcq\" (UniqueName: \"kubernetes.io/projected/055a60e0-b7aa-4b35-9807-01fe7095113d-kube-api-access-dxzcq\") on node \"crc\" DevicePath \"\"" Jan 27 17:15:41 crc kubenswrapper[4966]: I0127 17:15:41.755529 4966 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-85mqp" Jan 27 17:15:41 crc kubenswrapper[4966]: I0127 17:15:41.797642 4966 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-85mqp"] Jan 27 17:15:41 crc kubenswrapper[4966]: I0127 17:15:41.816782 4966 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-85mqp"] Jan 27 17:15:42 crc kubenswrapper[4966]: I0127 17:15:42.540018 4966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" path="/var/lib/kubelet/pods/055a60e0-b7aa-4b35-9807-01fe7095113d/volumes" Jan 27 17:15:50 crc kubenswrapper[4966]: I0127 17:15:50.212375 4966 scope.go:117] "RemoveContainer" containerID="d7a05510a11a99728ea20e6ee856e10ff7570727172dfb1e7817f0aba3b8ec44" Jan 27 17:15:50 crc kubenswrapper[4966]: I0127 17:15:50.240379 4966 scope.go:117] "RemoveContainer" containerID="557e37632b334ffef502c75e04fa7fe38acde25762befcf7ccaf57b982472e01" Jan 27 17:15:50 crc kubenswrapper[4966]: I0127 17:15:50.301791 4966 scope.go:117] "RemoveContainer" containerID="c5ef936eaa3fb8347ffddd13c05f053ebbd12d16c3c2c2007442555e9a147b8b" Jan 27 17:15:50 crc kubenswrapper[4966]: I0127 17:15:50.367699 4966 scope.go:117] "RemoveContainer" containerID="4e3017c62d9a17bc475e6ac95dba93e1231a00670830abde9338be0ad9bdbe00" Jan 27 17:16:10 crc kubenswrapper[4966]: I0127 17:16:10.127368 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:16:10 crc kubenswrapper[4966]: I0127 17:16:10.127985 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:16:40 crc kubenswrapper[4966]: I0127 17:16:40.119332 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:16:40 crc kubenswrapper[4966]: I0127 17:16:40.119967 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:17:10 crc kubenswrapper[4966]: I0127 17:17:10.121177 4966 patch_prober.go:28] interesting pod/machine-config-daemon-wtl9v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:17:10 crc kubenswrapper[4966]: I0127 17:17:10.121706 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:17:10 crc kubenswrapper[4966]: I0127 17:17:10.121745 4966 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" Jan 27 17:17:10 crc kubenswrapper[4966]: I0127 17:17:10.122588 4966 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8"} pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:17:10 crc kubenswrapper[4966]: I0127 17:17:10.122637 4966 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" containerName="machine-config-daemon" containerID="cri-o://b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8" gracePeriod=600 Jan 27 17:17:10 crc kubenswrapper[4966]: E0127 17:17:10.243767 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:17:10 crc kubenswrapper[4966]: I0127 17:17:10.842760 4966 generic.go:334] "Generic (PLEG): container finished" podID="75889828-fc5d-4516-a3c4-db3affd4f810" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8" exitCode=0 Jan 27 17:17:10 crc kubenswrapper[4966]: I0127 17:17:10.842825 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" event={"ID":"75889828-fc5d-4516-a3c4-db3affd4f810","Type":"ContainerDied","Data":"b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8"} Jan 27 17:17:10 crc kubenswrapper[4966]: I0127 17:17:10.843109 4966 scope.go:117] "RemoveContainer" containerID="0b82f5c4006e37ce4acea807f2f48ce9819f9a916fa418c6d1ef6e9b0f839312" Jan 27 17:17:10 crc kubenswrapper[4966]: I0127 17:17:10.843911 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8" Jan 27 17:17:10 crc kubenswrapper[4966]: E0127 17:17:10.844267 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:17:22 crc kubenswrapper[4966]: I0127 17:17:22.522312 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8" Jan 27 17:17:22 crc kubenswrapper[4966]: E0127 17:17:22.525468 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 27 17:17:36 crc kubenswrapper[4966]: I0127 17:17:36.521934 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8"
Jan 27 17:17:36 crc kubenswrapper[4966]: E0127 17:17:36.522883 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 17:17:51 crc kubenswrapper[4966]: I0127 17:17:51.521747 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8"
Jan 27 17:17:51 crc kubenswrapper[4966]: E0127 17:17:51.522722 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 17:18:02 crc kubenswrapper[4966]: I0127 17:18:02.520835 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8"
Jan 27 17:18:02 crc kubenswrapper[4966]: E0127 17:18:02.521626 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 17:18:16 crc kubenswrapper[4966]: I0127 17:18:16.527659 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8"
Jan 27 17:18:16 crc kubenswrapper[4966]: E0127 17:18:16.529369 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
Jan 27 17:18:27 crc kubenswrapper[4966]: I0127 17:18:27.521521 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8"
Jan 27 17:18:27 crc kubenswrapper[4966]: E0127 17:18:27.522335 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
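[Editor's note] The identical "back-off 5m0s" errors repeating from 17:17:10 through 17:19:07 show the restart backoff at its ceiling: the kubelet doubles the delay after each crash, starting at 10s and capping at 5m, and every sync attempt inside the window is skipped with CrashLoopBackOff. An illustrative sketch of that schedule (not the kubelet's actual code):

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay models the doubling schedule behind the messages above:
// 10s, 20s, 40s, ... capped at 5m0s.
func crashLoopDelay(restarts int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute // the "back-off 5m0s" ceiling in the log
	)
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %s\n", r, crashLoopDelay(r))
	}
	// From the 5th restart on, the delay pins at 5m0s, which is why the
	// machine-config-daemon entries repeat unchanged for minutes on end.
}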
podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:18:40 crc kubenswrapper[4966]: I0127 17:18:40.521871 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8" Jan 27 17:18:40 crc kubenswrapper[4966]: E0127 17:18:40.522772 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.808772 4966 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-99tk5"] Jan 27 17:18:41 crc kubenswrapper[4966]: E0127 17:18:41.809630 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.809659 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" Jan 27 17:18:41 crc kubenswrapper[4966]: E0127 17:18:41.809739 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="extract-content" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.809752 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="extract-content" Jan 27 17:18:41 crc kubenswrapper[4966]: E0127 17:18:41.809779 4966 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="extract-utilities" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.809790 4966 state_mem.go:107] "Deleted CPUSet assignment" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="extract-utilities" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.810185 4966 memory_manager.go:354] "RemoveStaleState removing state" podUID="055a60e0-b7aa-4b35-9807-01fe7095113d" containerName="registry-server" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.813182 4966 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.825365 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99tk5"] Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.846562 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a10ab4-f3c5-44e8-8178-dcbd58a585a2-utilities\") pod \"redhat-operators-99tk5\" (UID: \"25a10ab4-f3c5-44e8-8178-dcbd58a585a2\") " pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.846958 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a10ab4-f3c5-44e8-8178-dcbd58a585a2-catalog-content\") pod \"redhat-operators-99tk5\" (UID: \"25a10ab4-f3c5-44e8-8178-dcbd58a585a2\") " pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.846995 4966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwgj\" (UniqueName: \"kubernetes.io/projected/25a10ab4-f3c5-44e8-8178-dcbd58a585a2-kube-api-access-8rwgj\") pod \"redhat-operators-99tk5\" (UID: \"25a10ab4-f3c5-44e8-8178-dcbd58a585a2\") " pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.949141 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a10ab4-f3c5-44e8-8178-dcbd58a585a2-catalog-content\") pod \"redhat-operators-99tk5\" (UID: \"25a10ab4-f3c5-44e8-8178-dcbd58a585a2\") " pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.949217 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwgj\" (UniqueName: \"kubernetes.io/projected/25a10ab4-f3c5-44e8-8178-dcbd58a585a2-kube-api-access-8rwgj\") pod \"redhat-operators-99tk5\" (UID: \"25a10ab4-f3c5-44e8-8178-dcbd58a585a2\") " pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.949501 4966 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a10ab4-f3c5-44e8-8178-dcbd58a585a2-utilities\") pod \"redhat-operators-99tk5\" (UID: \"25a10ab4-f3c5-44e8-8178-dcbd58a585a2\") " pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.949727 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a10ab4-f3c5-44e8-8178-dcbd58a585a2-catalog-content\") pod \"redhat-operators-99tk5\" (UID: \"25a10ab4-f3c5-44e8-8178-dcbd58a585a2\") " pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.950147 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a10ab4-f3c5-44e8-8178-dcbd58a585a2-utilities\") pod \"redhat-operators-99tk5\" (UID: \"25a10ab4-f3c5-44e8-8178-dcbd58a585a2\") " pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:41 crc kubenswrapper[4966]: I0127 17:18:41.976786 4966 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8rwgj\" (UniqueName: \"kubernetes.io/projected/25a10ab4-f3c5-44e8-8178-dcbd58a585a2-kube-api-access-8rwgj\") pod \"redhat-operators-99tk5\" (UID: \"25a10ab4-f3c5-44e8-8178-dcbd58a585a2\") " pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:42 crc kubenswrapper[4966]: I0127 17:18:42.141347 4966 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99tk5" Jan 27 17:18:42 crc kubenswrapper[4966]: I0127 17:18:42.650829 4966 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99tk5"] Jan 27 17:18:42 crc kubenswrapper[4966]: I0127 17:18:42.964299 4966 generic.go:334] "Generic (PLEG): container finished" podID="25a10ab4-f3c5-44e8-8178-dcbd58a585a2" containerID="502382f2497f1d3183232e97df5574483032a203f4117d05be63fcb2684f856c" exitCode=0 Jan 27 17:18:42 crc kubenswrapper[4966]: I0127 17:18:42.964398 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99tk5" event={"ID":"25a10ab4-f3c5-44e8-8178-dcbd58a585a2","Type":"ContainerDied","Data":"502382f2497f1d3183232e97df5574483032a203f4117d05be63fcb2684f856c"} Jan 27 17:18:42 crc kubenswrapper[4966]: I0127 17:18:42.964619 4966 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99tk5" event={"ID":"25a10ab4-f3c5-44e8-8178-dcbd58a585a2","Type":"ContainerStarted","Data":"6dbab30a6f9dd22c747d64b2da9c4d83310ccf5b60d7f275c6e4fc8e460324f3"} Jan 27 17:18:54 crc kubenswrapper[4966]: I0127 17:18:54.522348 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8" Jan 27 17:18:54 crc kubenswrapper[4966]: E0127 17:18:54.523267 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810" Jan 27 17:18:57 crc kubenswrapper[4966]: I0127 17:18:57.740306 4966 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:18:57 crc kubenswrapper[4966]: I0127 17:18:57.743106 4966 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1dc01362-ea5a-48fe-b67f-1e00b193c36e" containerName="galera" probeResult="failure" output="command timed out" Jan 27 17:19:01 crc kubenswrapper[4966]: E0127 17:19:01.846851 4966 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 17:19:01 crc kubenswrapper[4966]: E0127 17:19:01.850513 4966 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Jan 27 17:19:01 crc kubenswrapper[4966]: E0127 17:19:01.852277 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-99tk5" podUID="25a10ab4-f3c5-44e8-8178-dcbd58a585a2"
Jan 27 17:19:02 crc kubenswrapper[4966]: E0127 17:19:02.225228 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99tk5" podUID="25a10ab4-f3c5-44e8-8178-dcbd58a585a2"
Jan 27 17:19:07 crc kubenswrapper[4966]: I0127 17:19:07.522075 4966 scope.go:117] "RemoveContainer" containerID="b2529efdffd5b304dd81582c40ddec07afe8d0d46aee1f06c3e3840d1e4319b8"
Jan 27 17:19:07 crc kubenswrapper[4966]: E0127 17:19:07.523349 4966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtl9v_openshift-machine-config-operator(75889828-fc5d-4516-a3c4-db3affd4f810)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtl9v" podUID="75889828-fc5d-4516-a3c4-db3affd4f810"
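[Editor's note] The last entries show the two faces of a failed pull: the sync that actually attempted the pull reports ErrImagePull (17:19:01), and the very next sync, landing inside the per-image backoff window, reports ImagePullBackOff instead (17:19:02). A toy Go model of that distinction, under the assumption that the pull backoff doubles with a cap much like the restart backoff; none of these names are the kubelet's:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pullState is a toy model of a per-image pull backoff, not kubelet code.
type pullState struct {
	lastFailure time.Time
	backoff     time.Duration
}

func (p *pullState) syncImage(now time.Time, pull func() error) string {
	if p.backoff > 0 && now.Before(p.lastFailure.Add(p.backoff)) {
		return "ImagePullBackOff" // inside the backoff window; no pull attempted
	}
	if err := pull(); err != nil {
		p.lastFailure = now
		switch {
		case p.backoff == 0:
			p.backoff = 10 * time.Second
		case p.backoff < 5*time.Minute:
			p.backoff *= 2
		}
		return "ErrImagePull" // the attempt itself failed (here: context canceled)
	}
	p.backoff = 0
	return "Pulled"
}

func main() {
	s := &pullState{}
	now := time.Now()
	fail := func() error { return errors.New("context canceled") }
	fmt.Println(s.syncImage(now, fail))                  // ErrImagePull     (cf. 17:19:01)
	fmt.Println(s.syncImage(now.Add(time.Second), fail)) // ImagePullBackOff (cf. 17:19:02)
}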